## App skeleton

Here is what your app would look like. On startup, check if a task is scheduled and, if so, run it on the correct hardware. Once done, set the hardware back to the free-plan CPU and prompt the user for a new task.

<Tip warning={true}>

Such a workflow does not support concurrent access like a normal demo. In particular, the interface will be disabled while training occurs. It is preferable to set your repo as private to ensure you are the only user.

</Tip>

```py
# Space will need your token to request hardware: set it as a Secret!
HF_TOKEN = os.environ.get("HF_TOKEN")

# Space own repo_id
TRAINING_SPACE_ID = "Wauplin/dreambooth-training"

from huggingface_hub import HfApi, SpaceHardware
api = HfApi(token=HF_TOKEN)

# On Space startup, check if a task is scheduled. If yes, finetune the model. If not,
# display an interface to request a new task.
task = get_task()
if task is None:
    # Start Gradio app
    def gradio_fn(task):
        # On user request, add task and request hardware
        add_task(task)
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.T4_MEDIUM)

    gr.Interface(fn=gradio_fn, ...).launch()
else:
    runtime = api.get_space_runtime(repo_id=TRAINING_SPACE_ID)
    # Check if Space is loaded with a GPU.
    if runtime.hardware == SpaceHardware.T4_MEDIUM:
        # If yes, finetune base model on dataset!
        train_and_upload(task)

        # Then, mark the task as "DONE"
        mark_as_done(task)

        # DO NOT FORGET: set back CPU hardware
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.CPU_BASIC)
    else:
        api.request_space_hardware(repo_id=TRAINING_SPACE_ID, hardware=SpaceHardware.T4_MEDIUM)
```
## Task scheduler

Scheduling tasks can be done in many ways. Here is an example of how it could be done using a simple CSV stored as a Dataset.

```py
# Dataset ID in which a `tasks.csv` file contains the tasks to perform.
# Here is a basic example for `tasks.csv` containing inputs (base model and dataset)
# and status (PENDING or DONE).
#     multimodalart/sd-fine-tunable,Wauplin/concept-1,DONE
#     multimodalart/sd-fine-tunable,Wauplin/concept-2,PENDING
TASK_DATASET_ID = "Wauplin/dreambooth-task-scheduler"

def _get_csv_file():
    return hf_hub_download(repo_id=TASK_DATASET_ID, filename="tasks.csv", repo_type="dataset", token=HF_TOKEN)

def get_task():
    with open(_get_csv_file()) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        for row in csv_reader:
            if row[2] == "PENDING":
                return row[0], row[1]  # model_id, dataset_id

def add_task(task):
    model_id, dataset_id = task
    with open(_get_csv_file()) as f:
        tasks = f.read()
    api.upload_file(
        repo_id=TASK_DATASET_ID,
        repo_type="dataset",
        path_in_repo="tasks.csv",
        # Quick and dirty way to add a task
        path_or_fileobj=(tasks + f"\n{model_id},{dataset_id},PENDING").encode(),
    )

def mark_as_done(task):
    model_id, dataset_id = task
    with open(_get_csv_file()) as f:
        tasks = f.read()
    api.upload_file(
        repo_id=TASK_DATASET_ID,
        repo_type="dataset",
        path_in_repo="tasks.csv",
        # Quick and dirty way to set the task as DONE
        path_or_fileobj=tasks.replace(
            f"{model_id},{dataset_id},PENDING",
            f"{model_id},{dataset_id},DONE"
        ).encode(),
    )
```
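The status-flipping logic above can be checked locally without any Hub calls. Here is a minimal, self-contained sketch of the same PENDING/DONE bookkeeping on an in-memory CSV string; the helper names `next_pending` and `mark_done` are hypothetical stand-ins for `get_task` and `mark_as_done`:

```python
import csv
import io

def next_pending(csv_text):
    """Return the first (model_id, dataset_id) pair still marked PENDING, or None."""
    for row in csv.reader(io.StringIO(csv_text)):
        if row and row[2] == "PENDING":
            return row[0], row[1]
    return None

def mark_done(csv_text, model_id, dataset_id):
    """Flip a task's status from PENDING to DONE (same string replacement as above)."""
    return csv_text.replace(
        f"{model_id},{dataset_id},PENDING",
        f"{model_id},{dataset_id},DONE",
    )

tasks = (
    "multimodalart/sd-fine-tunable,Wauplin/concept-1,DONE\n"
    "multimodalart/sd-fine-tunable,Wauplin/concept-2,PENDING\n"
)
task = next_pending(tasks)  # ('multimodalart/sd-fine-tunable', 'Wauplin/concept-2')
tasks = mark_done(tasks, *task)
```

Once every row is DONE, `next_pending` returns `None`, which is exactly the condition the app skeleton uses to fall back to the Gradio interface.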
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
# Upload files to the Hub

Sharing your files and work is an important aspect of the Hub. The `huggingface_hub` offers several options for uploading your files to the Hub. You can use these functions independently or integrate them into your library, making it more convenient for your users to interact with the Hub. This guide will show you how to push files:

- without using Git.
- that are very large with [Git LFS](https://git-lfs.github.com/).
- with the `commit` context manager.
- with the [`~Repository.push_to_hub`] function.

Whenever you want to upload files to the Hub, you need to log in to your Hugging Face account. For more details about authentication, check out [this section](../quick-start#authentication).
## Upload a file

Once you've created a repository with [`create_repo`], you can upload a file to your repository using [`upload_file`].

Specify the path of the file to upload, where you want to upload the file to in the repository, and the name of the repository you want to add the file to. Depending on your repository type, you can optionally set the repository type as a `dataset`, `model`, or `space`.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="/path/to/local/folder/README.md",
...     path_in_repo="README.md",
...     repo_id="username/test-dataset",
...     repo_type="dataset",
... )
```
## Upload a folder

Use the [`upload_folder`] function to upload a local folder to an existing repository. Specify the path of the local folder to upload, where you want to upload the folder to in the repository, and the name of the repository you want to add the folder to. Depending on your repository type, you can optionally set the repository type as a `dataset`, `model`, or `space`.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Upload all the content from the local folder to your remote Space.
# By default, files are uploaded at the root of the repo
>>> api.upload_folder(
...     folder_path="/path/to/local/space",
...     repo_id="username/my-cool-space",
...     repo_type="space",
... )
```

By default, the `.gitignore` file will be taken into account to know which files should be committed or not. By default we check if a `.gitignore` file is present in a commit, and if not, we check if it exists on the Hub. Please be aware that only a `.gitignore` file present at the root of the directory will be used. We do not check for `.gitignore` files in subdirectories.

If you don't want to use a hardcoded `.gitignore` file, you can use the `allow_patterns` and `ignore_patterns` arguments to filter which files to upload. These parameters accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). If both `allow_patterns` and `ignore_patterns` are provided, both constraints apply.

Besides the `.gitignore` file and allow/ignore patterns, any `.git/` folder present in any subdirectory will be ignored.

```py
>>> api.upload_folder(
...     folder_path="/path/to/local/folder",
...     path_in_repo="my-dataset/train",  # Upload to a specific folder
...     repo_id="username/test-dataset",
...     repo_type="dataset",
...     ignore_patterns="**/logs/*.txt",  # Ignore all text logs
... )
```

You can also use the `delete_patterns` argument to specify files you want to delete from the repo in the same commit. This can prove useful if you want to clean a remote folder before pushing files to it and you don't know which files already exist.

The example below uploads the local `./logs` folder to the remote `/experiment/logs/` folder. Only txt files are uploaded, but before that, all previous logs on the repo are deleted, all in a single commit.

```py
>>> api.upload_folder(
...     folder_path="/path/to/local/folder/logs",
...     repo_id="username/trained-model",
...     path_in_repo="experiment/logs/",
...     allow_patterns="*.txt",  # Upload all local text files
...     delete_patterns="*.txt",  # Delete all remote text files before
... )
```
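To get an intuition for how allow/ignore patterns interact, here is a rough, standard-library-only sketch of the filtering rule (a path must match at least one allow pattern, if any are given, and no ignore pattern). The helper name `filter_files` is hypothetical, and note that `fnmatch`'s `*` crosses `/`, so this is only an approximation of the Hub's actual matching:

```python
from fnmatch import fnmatch

def filter_files(paths, allow_patterns=None, ignore_patterns=None):
    """Keep paths matching at least one allow pattern (if given) and no ignore pattern."""
    selected = []
    for path in paths:
        if allow_patterns is not None and not any(fnmatch(path, p) for p in allow_patterns):
            continue
        if ignore_patterns is not None and any(fnmatch(path, p) for p in ignore_patterns):
            continue
        selected.append(path)
    return selected

files = ["weights.h5", "logs/run1.txt", "README.md"]
kept = filter_files(files, ignore_patterns=["logs/*"])             # drops the log file
only_text = filter_files(files, allow_patterns=["*.txt", "*.md"])  # keeps txt/md only
```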
## Upload from the CLI

You can use the `huggingface-cli upload` command from the terminal to directly upload files to the Hub. Internally, it uses the same [`upload_file`] and [`upload_folder`] helpers described above.

You can either upload a single file or an entire folder:

```bash
# Usage:  huggingface-cli upload [repo_id] [local_path] [path_in_repo]
>>> huggingface-cli upload Wauplin/my-cool-model ./models/model.safetensors model.safetensors
https://huggingface.co/Wauplin/my-cool-model/blob/main/model.safetensors
>>> huggingface-cli upload Wauplin/my-cool-model ./models .
https://huggingface.co/Wauplin/my-cool-model/tree/main
```

`local_path` and `path_in_repo` are optional and can be implicitly inferred. If `local_path` is not set, the tool will check if a local folder or file has the same name as the `repo_id`. If that's the case, its content will be uploaded. Otherwise, an exception is raised asking the user to explicitly set `local_path`. In any case, if `path_in_repo` is not set, files are uploaded at the root of the repo.

For more details about the CLI upload command, please refer to the [CLI guide](./cli#huggingface-cli-upload).
## Upload a large folder

In most cases, the [`upload_folder`] method and `huggingface-cli upload` command should be the go-to solutions to upload files to the Hub. They ensure a single commit will be made, handle a lot of use cases, and fail explicitly when something wrong happens. However, when dealing with a large amount of data, you will usually prefer a resilient process, even if it leads to more commits or requires more CPU usage. The [`upload_large_folder`] method has been implemented in that spirit:

- it is resumable: the upload process is split into many small tasks (hashing files, pre-uploading them, and committing them). Each time a task is completed, the result is cached locally in a `./cache/huggingface` folder inside the folder you are trying to upload. By doing so, restarting the process after an interruption will resume all completed tasks.
- it is multi-threaded: hashing large files and pre-uploading them benefits a lot from multithreading if your machine allows it.
- it is resilient to errors: a high-level retry mechanism has been added to retry each independent task indefinitely until it passes (no matter if it's an OSError, ConnectionError, PermissionError, etc.). This mechanism is double-edged. If transient errors happen, the process will continue and retry. If permanent errors happen (e.g. permission denied), it will retry indefinitely without solving the root cause.

If you want more technical details about how `upload_large_folder` is implemented under the hood, please have a look at the [`upload_large_folder`] package reference.

Here is how to use [`upload_large_folder`] in a script. The method signature is very similar to [`upload_folder`]:

```py
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
... )
```

You will see the following output in your terminal:
```
Repo created: https://huggingface.co/datasets/HuggingFaceM4/Docmatix
Found 5 candidate files to upload
Recovering from metadata files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:00<00:00, 542.66it/s]

---------- 2024-07-22 17:23:17 (0:00:00) ----------
Files:   hashed 5/5 (5.0G/5.0G) | pre-uploaded: 0/5 (0.0/5.0G) | committed: 0/5 (0.0/5.0G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 5 | committing: 0 | waiting: 11
---------------------------------------------------
```

First, the repo is created if it didn't exist before. Then, the local folder is scanned for files to upload. For each file, we try to recover metadata information (from a previously interrupted upload). From there, it is able to launch workers and print an update status every 1 minute. Here, we can see that 5 files have already been hashed but not pre-uploaded. 5 workers are pre-uploading files while the 11 others are waiting for a task.

A command line is also provided. You can define the number of workers and the level of verbosity in the terminal:

```sh
huggingface-cli upload-large-folder HuggingFaceM4/Docmatix --repo-type=dataset /path/to/local/docmatix --num-workers=16
```

<Tip>

For large uploads, you have to set `repo_type="model"` or `--repo-type=model` explicitly. Usually, this information is implicit in all other `HfApi` methods. This is to avoid having data uploaded to a repository with a wrong type. If that's the case, you'll have to re-upload everything.

</Tip>

<Tip warning={true}>

While much more robust for uploading large folders, `upload_large_folder` is more limited than [`upload_folder`] feature-wise. In practice:

- you cannot set a custom `path_in_repo`. If you want to upload to a subfolder, you need to set the proper structure locally.
- you cannot set a custom `commit_message` and `commit_description`, since multiple commits are created.
- you cannot delete files from the repo while uploading. Please make a separate commit first.
- you cannot create a PR directly. Please create a PR first (from the UI or using [`create_pull_request`]) and then commit to it by passing `revision`.

</Tip>
## Tips and tricks for large uploads

There are some limitations to be aware of when dealing with a large amount of data in your repo. Given the time it takes to stream the data, getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying. Check out our [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) guide for best practices on how to structure your repositories on the Hub. Let's move on with some practical tips to make your upload process as smooth as possible.

- **Start small**: We recommend starting with a small amount of data to test your upload script. It's easier to iterate on a script when failing takes only a little time.
- **Expect failures**: Streaming large amounts of data is challenging. You don't know what can happen, but it's always best to consider that something will fail at least once, no matter if it's due to your machine, your connection, or our servers. For example, if you plan to upload a large number of files, it's best to keep track locally of which files you already uploaded before uploading the next batch. You are ensured that an LFS file that is already committed will never be re-uploaded, but checking it client-side can still save some time. This is what [`upload_large_folder`] does for you.
- **Use `hf_transfer`**: this is a Rust-based [library](https://github.com/huggingface/hf_transfer) meant to speed up uploads on machines with very high bandwidth. To use `hf_transfer`:
    1. Specify the `hf_transfer` extra when installing `huggingface_hub` (i.e., `pip install huggingface_hub[hf_transfer]`).
    2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable.

<Tip warning={true}>

`hf_transfer` is a power user tool! It is tested and production-ready, but it lacks user-friendly features like advanced error handling or proxies. For more details, please take a look at this [section](https://huggingface.co/docs/huggingface_hub/hf_transfer).

</Tip>
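The "keep track locally of which files you already uploaded" tip can be implemented with a tiny local ledger. Below is a minimal sketch using only the standard library; the file name `upload_state.json` and the helper names are hypothetical, and a real script would call its upload function on each pending file before marking it done:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical local ledger recording which files were already pushed.
state_file = Path(tempfile.mkdtemp()) / "upload_state.json"

def load_uploaded(state_file):
    """Return the set of files recorded as uploaded so far."""
    if state_file.exists():
        return set(json.loads(state_file.read_text()))
    return set()

def mark_uploaded(state_file, path):
    """Record a file as uploaded; the ledger survives script restarts."""
    uploaded = load_uploaded(state_file)
    uploaded.add(path)
    state_file.write_text(json.dumps(sorted(uploaded)))

def pending(state_file, all_files):
    """Files still to upload in the next batch."""
    uploaded = load_uploaded(state_file)
    return [f for f in all_files if f not in uploaded]

mark_uploaded(state_file, "shard-0.bin")
todo = pending(state_file, ["shard-0.bin", "shard-1.bin"])
```

Because the ledger lives on disk, restarting the script after a failure skips everything already recorded, which is the same idea [`upload_large_folder`] applies with its local cache.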
## Advanced features

In most cases, you won't need more than [`upload_file`] and [`upload_folder`] to upload your files to the Hub. However, `huggingface_hub` has more advanced features to make things easier. Let's have a look at them!
### Non-blocking uploads

In some cases, you want to push data without blocking your main thread. This is particularly useful to upload logs and artifacts while continuing training. To do so, you can use the `run_as_future` argument in both [`upload_file`] and [`upload_folder`]. This will return a [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html#future-objects) object that you can use to check the status of the upload.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.upload_folder( # Upload in the background (non-blocking action)
...     repo_id="username/my-model",
...     folder_path="checkpoints-001",
...     run_as_future=True,
... )
>>> future
Future(...)
>>> future.done()
False
>>> future.result()  # Wait for the upload to complete (blocking action)
...
```

<Tip>

Background jobs are queued when using `run_as_future=True`. This means that you are guaranteed that the jobs will be executed in the correct order.

</Tip>

Even though background jobs are mostly useful to upload data/create commits, you can queue any method you like using [`run_as_future`]. For instance, you can use it to create a repo and then upload data to it in the background. The built-in `run_as_future` argument in upload methods is just an alias around it.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.run_as_future(api.create_repo, "username/my-model", exists_ok=True)
Future(...)
>>> api.upload_file(
...     repo_id="username/my-model",
...     path_in_repo="file.txt",
...     path_or_fileobj=b"file content",
...     run_as_future=True,
... )
Future(...)
```
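The ordering guarantee mentioned in the Tip can be illustrated with the standard library alone. A single-worker executor processes submitted jobs strictly in submission order, which is the same behavior as queued `run_as_future` jobs (this is an illustrative sketch, not `huggingface_hub` internals):

```python
from concurrent.futures import ThreadPoolExecutor

# One worker means queued jobs run one after another, in submission order.
executor = ThreadPoolExecutor(max_workers=1)

log = []
first = executor.submit(log.append, "create repo")
second = executor.submit(log.append, "upload file")
second.result()  # block until the last job is done; earlier jobs finished before it
executor.shutdown()
```

Waiting on the last `Future` is therefore enough to know that every previously queued job has completed too.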
### Upload a folder by chunks

[`upload_folder`] makes it easy to upload an entire folder to the Hub. However, for large folders (thousands of files or hundreds of GB), we recommend using [`upload_large_folder`], which splits the upload into multiple commits. See the [Upload a large folder](#upload-a-large-folder) section for more details.
### Scheduled uploads

The Hugging Face Hub makes it easy to save and version data. However, there are some limitations when updating the same file thousands of times. For instance, you might want to save logs of a training process or user feedback on a deployed Space. In these cases, uploading the data as a dataset on the Hub makes sense, but it can be hard to do properly. The main reason is that you don't want to version every update of your data because it'll make the git repository unusable. The [`CommitScheduler`] class offers a solution to this problem.

The idea is to run a background job that regularly pushes a local folder to the Hub. Let's assume you have a Gradio Space that takes as input some text and generates two translations of it. Then, the user can select their preferred translation. For each run, you want to save the input, output, and user preference to analyze the results. This is a perfect use case for [`CommitScheduler`]; you want to save data to the Hub (potentially millions of user feedback entries), but you don't _need_ to save each user's input in real-time. Instead, you can save the data locally in a JSON file and upload it every 10 minutes. For example:

```py
>>> import json
>>> import uuid
>>> from pathlib import Path
>>> import gradio as gr
>>> from huggingface_hub import CommitScheduler

# Define the file where to save the data. Use UUID to make sure not to overwrite existing data from a previous run.
>>> feedback_file = Path("user_feedback/") / f"data_{uuid.uuid4()}.json"
>>> feedback_folder = feedback_file.parent

# Schedule regular uploads. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )

# Define the function that will be called when the user submits their feedback (to be called in Gradio)
>>> def save_feedback(input_text: str, output_1: str, output_2: str, user_choice: int) -> None:
...     """
...     Append input/outputs and user feedback to a JSON Lines file using a thread lock to avoid concurrent writes from different users.
...     """
...     with scheduler.lock:
...         with feedback_file.open("a") as f:
...             f.write(json.dumps({"input": input_text, "output_1": output_1, "output_2": output_2, "user_choice": user_choice}))
...             f.write("\n")

# Start Gradio
>>> with gr.Blocks() as demo:
>>>     ...  # define Gradio demo + use `save_feedback`
>>> demo.launch()
```

And that's it! User inputs/outputs and feedback will be available as a dataset on the Hub. By using a unique JSON file name, you are guaranteed you won't overwrite data from a previous run or data from other Spaces/replicas pushing concurrently to the same repository.

For more details about the [`CommitScheduler`], here is what you need to know:
- **append-only:** It is assumed that you will only add content to the folder. You must only append data to existing files or create new files. Deleting or overwriting a file might corrupt your repository.
- **git history:** The scheduler will commit the folder every `every` minutes. To avoid polluting the git repository too much, it is recommended to set a minimal value of 5 minutes. Besides, the scheduler is designed to avoid empty commits. If no new content is detected in the folder, the scheduled commit is dropped.
- **errors:** The scheduler runs as a background thread. It is started when you instantiate the class and never stops. In particular, if an error occurs during the upload (example: connection issue), the scheduler will silently ignore it and retry at the next scheduled commit.
- **thread-safety:** In most cases, it is safe to assume that you can write to a file without having to worry about a lock file. The scheduler will not crash or be corrupted if you write content to the folder while it's uploading. In practice, _it is possible_ that concurrency issues happen for heavily-loaded apps. In this case, we advise using the `scheduler.lock` lock to ensure thread-safety. The lock is blocked only when the scheduler scans the folder for changes, not when it uploads data. You can safely assume that it will not affect the user experience on your Space.
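To see why the lock matters for heavily-loaded apps, here is a standalone sketch of the same append pattern under concurrent writers, using a plain `threading.Lock` in place of `scheduler.lock` (no Hub or scheduler involved; file names are hypothetical):

```python
import json
import tempfile
import threading
from pathlib import Path

lock = threading.Lock()  # plays the role of `scheduler.lock`
feedback_file = Path(tempfile.mkdtemp()) / "data.json"

def save_feedback(payload: dict) -> None:
    # Holding the lock keeps each JSON line intact even with concurrent writers
    with lock:
        with feedback_file.open("a") as f:
            f.write(json.dumps(payload) + "\n")

threads = [threading.Thread(target=save_feedback, args=({"user_choice": i},)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

lines = feedback_file.read_text().splitlines()  # one well-formed JSON object per line
```

Each line parses back to a complete record, which is exactly the append-only JSON Lines format the scheduler expects.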
#### Space persistence demo

Persisting data from a Space to a Dataset on the Hub is the main use case for [`CommitScheduler`]. Depending on the use case, you might want to structure your data differently. The structure has to be robust to concurrent users and restarts, which often implies generating UUIDs. Besides robustness, you should upload data in a format readable by the πŸ€— Datasets library for later reuse. We created a [Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver) that demonstrates how to save several different data formats (you may need to adapt it for your own specific needs).
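Because each run writes its own UUID-named JSON Lines file, reading the data back means aggregating over all of them. Here is a minimal sketch with the standard library (the file names are hypothetical stand-ins for the UUID-named files; the same files can also be loaded directly with the πŸ€— Datasets library's JSON loader):

```python
import json
import tempfile
from pathlib import Path

# Two hypothetical per-run files, as produced by the UUID naming scheme above
data_dir = Path(tempfile.mkdtemp())
(data_dir / "data_run1.json").write_text('{"input": "hi", "user_choice": 1}\n')
(data_dir / "data_run2.json").write_text('{"input": "yo", "user_choice": 2}\n')

# Aggregate every JSON Lines file back into a single list of records
records = []
for path in sorted(data_dir.glob("data_*.json")):
    with path.open() as f:
        records.extend(json.loads(line) for line in f)
```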
#### Custom uploads

[`CommitScheduler`] assumes your data is append-only and should be uploaded "as is". However, you might want to customize the way data is uploaded. You can do that by creating a class inheriting from [`CommitScheduler`] and overwriting the `push_to_hub` method (feel free to overwrite it any way you want). You are guaranteed it will be called every `every` minutes in a background thread. You don't have to worry about concurrency and errors, but you must be careful about other aspects, such as pushing empty commits or duplicated data.

In the (simplified) example below, we overwrite `push_to_hub` to zip all PNG files in a single archive to avoid overloading the repo on the Hub:

```py
class ZipScheduler(CommitScheduler):
    def push_to_hub(self):
        # 1. List PNG files
        png_files = list(self.folder_path.glob("*.png"))
        if len(png_files) == 0:
            return None  # return early if nothing to commit

        # 2. Zip png files in a single archive
        with tempfile.TemporaryDirectory() as tmpdir:
            archive_path = Path(tmpdir) / "train.zip"
            with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zip:
                for png_file in png_files:
                    zip.write(filename=png_file, arcname=png_file.name)

            # 3. Upload archive
            self.api.upload_file(..., path_or_fileobj=archive_path)

        # 4. Delete local png files to avoid re-uploading them later
        for png_file in png_files:
            png_file.unlink()
```

When you overwrite `push_to_hub`, you have access to the attributes of [`CommitScheduler`] and especially:
- [`HfApi`] client: `api`
- Folder parameters: `folder_path` and `path_in_repo`
- Repo parameters: `repo_id`, `repo_type`, `revision`
- The thread lock: `lock`

<Tip>

For more examples of custom schedulers, check out our [demo Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver) containing different implementations depending on your use cases.

</Tip>
### create_commit

The [`upload_file`] and [`upload_folder`] functions are high-level APIs that are generally convenient to use. We recommend trying these functions first if you don't need to work at a lower level. However, if you want to work at a commit-level, you can use the [`create_commit`] function directly.

There are three types of operations supported by [`create_commit`]:

- [`CommitOperationAdd`] uploads a file to the Hub. If the file already exists, the file contents are overwritten. This operation accepts two arguments:
  - `path_in_repo`: the repository path to upload a file to.
  - `path_or_fileobj`: either a path to a file on your filesystem or a file-like object. This is the content of the file to upload to the Hub.
- [`CommitOperationDelete`] removes a file or a folder from a repository. This operation accepts `path_in_repo` as an argument.
- [`CommitOperationCopy`] copies a file within a repository. This operation accepts three arguments:
  - `src_path_in_repo`: the repository path of the file to copy.
  - `path_in_repo`: the repository path where the file should be copied.
  - `src_revision`: optional - the revision of the file to copy if you want to copy a file from a different branch/revision.

For example, if you want to upload two files and delete a file in a Hub repository:

1. Use the appropriate `CommitOperation` to add or delete a file and to delete a folder:

```py
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationCopy, CommitOperationDelete
>>> api = HfApi()
>>> operations = [
...     CommitOperationAdd(path_in_repo="LICENSE.md", path_or_fileobj="~/repo/LICENSE.md"),
...     CommitOperationAdd(path_in_repo="weights.h5", path_or_fileobj="~/repo/weights-final.h5"),
...     CommitOperationDelete(path_in_repo="old-weights.h5"),
...     CommitOperationDelete(path_in_repo="logs/"),
...     CommitOperationCopy(src_path_in_repo="image.png", path_in_repo="duplicate_image.png"),
... ]
```

2. Pass your operations to [`create_commit`]:

```py
>>> api.create_commit(
...     repo_id="lysandre/test-model",
...     operations=operations,
...     commit_message="Upload my model weights and license",
... )
```

In addition to [`upload_file`] and [`upload_folder`], the following functions also use [`create_commit`] under the hood:

- [`delete_file`] deletes a single file from a repository on the Hub.
- [`delete_folder`] deletes an entire folder from a repository on the Hub.
- [`metadata_update`] updates a repository's metadata.

For more detailed information, take a look at the [`HfApi`] reference.
### Preupload LFS files before commit

In some cases, you might want to upload huge files to S3 **before** making the commit call. For example, if you are committing a dataset in several shards that are generated in-memory, you would need to upload the shards one by one to avoid an out-of-memory issue. A solution is to upload each shard as a separate commit on the repo. While perfectly valid, this solution has the drawback of potentially messing up the git history by generating tens of commits. To overcome this issue, you can upload your files one by one to S3 and then create a single commit at the end. This is possible using [`preupload_lfs_files`] in combination with [`create_commit`].

<Tip warning={true}>

This is a power-user method. Directly using [`upload_file`], [`upload_folder`] or [`create_commit`] instead of handling the low-level logic of pre-uploading files is the way to go in the vast majority of cases. The main caveat of [`preupload_lfs_files`] is that until the commit is actually made, the uploaded files are not accessible on the repo on the Hub. If you have a question, feel free to ping us on our Discord or in a GitHub issue.

</Tip>

Here is a simple example illustrating how to pre-upload files:

```py
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo

>>> repo_id = create_repo("test_preupload").repo_id

>>> operations = []  # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
...     content = ...  # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition])
...     operations.append(addition)

>>> # Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
```

First, we create the [`CommitOperationAdd`] objects one by one. In a real-world example, those would contain the generated shards. Each file is uploaded before generating the next one. During the [`preupload_lfs_files`] step, **the `CommitOperationAdd` object is mutated**. You should only use it to pass it directly to [`create_commit`]. The main update of the object is that **the binary content is removed** from it, meaning that it will be garbage-collected if you don't store another reference to it. This is expected, as we don't want to keep in memory content that is already uploaded. Finally, we create the commit by passing all the operations to [`create_commit`]. You can pass additional operations (add, delete or copy) that have not been processed yet, and they will be handled correctly.
All the methods described above use the Hub's API to upload files. This is the recommended way to upload files to the Hub. However, we also provide [`Repository`], a wrapper around the git tool to manage a local repository. <Tip warning={true}> Although [`Repository`] is not formally deprecated, we recommend using the HTTP-based methods described above instead. For more details about this recommendation, please have a look at [this guide](../concepts/git_vs_http) explaining the core differences between HTTP-based and Git-based approaches. </Tip> Git LFS automatically handles files larger than 10MB. But for very large files (>5GB), you need to install a custom transfer agent for Git LFS: ```bash huggingface-cli lfs-enable-largefiles ``` You should install this for each repository that has a very large file. Once installed, you'll be able to push files larger than 5GB.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/upload.md
https://huggingface.co/docs/huggingface_hub/en/guides/upload/#legacy-upload-files-with-git-lfs
#legacy-upload-files-with-git-lfs
.md
35_15
The `commit` context manager handles four of the most common Git commands: pull, add, commit, and push. `git-lfs` automatically tracks any file larger than 10MB. In the following example, the `commit` context manager: 1. Pulls from the `text-files` repository. 2. Adds a change made to `file.txt`. 3. Commits the change. 4. Pushes the change to the `text-files` repository. ```python >>> from huggingface_hub import Repository >>> with Repository(local_dir="text-files", clone_from="<user>/text-files").commit(commit_message="My first file :)"): ... with open("file.txt", "w+") as f: ... f.write(json.dumps({"hey": 8})) ``` Here is another example of how to use the `commit` context manager to save and upload a file to a repository: ```python >>> import torch >>> model = torch.nn.Transformer() >>> with Repository("torch-model", clone_from="<user>/torch-model", token=True).commit(commit_message="My cool model :)"): ... torch.save(model.state_dict(), "model.pt") ``` Set `blocking=False` if you would like to push your commits asynchronously. Non-blocking behavior is helpful when you want to continue running your script while your commits are being pushed. ```python >>> with repo.commit(commit_message="My cool model :)", blocking=False) ``` You can check the status of your push with the `command_queue` method: ```python >>> last_command = repo.command_queue[-1] >>> last_command.status ``` Refer to the table below for the possible statuses: | Status | Description | | -------- | ------------------------------------ | | -1 | The push is ongoing. | | 0 | The push has completed successfully. | | Non-zero | An error has occurred. | When `blocking=False`, commands are tracked, and your script will only exit when all pushes are completed, even if other errors occur in your script. Some additional useful commands for checking the status of a push include: ```python # Inspect an error. >>> last_command.stderr # Check whether a push is completed or ongoing. 
>>> last_command.is_done # Check whether a push command has errored. >>> last_command.failed ```
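The status codes in the table above can be mapped to readable labels with a small helper (a hypothetical convenience function for illustration, not part of `huggingface_hub`):

```python
def describe_push_status(status: int) -> str:
    """Map a `command_queue` status code to a human-readable label."""
    if status == -1:
        return "ongoing"
    if status == 0:
        return "success"
    return "error"  # any other (non-zero) code means an error occurred
```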
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/upload.md
https://huggingface.co/docs/huggingface_hub/en/guides/upload/#commit-context-manager
#commit-context-manager
.md
35_16
The [`Repository`] class has a [`~Repository.push_to_hub`] function to add files, make a commit, and push them to a repository. Unlike the `commit` context manager, you'll need to pull from a repository first before calling [`~Repository.push_to_hub`]. For example, if you've already cloned a repository from the Hub, then you can initialize the `repo` from the local directory: ```python >>> from huggingface_hub import Repository >>> repo = Repository(local_dir="path/to/local/repo") ``` Update your local clone with [`~Repository.git_pull`] and then push your file to the Hub: ```py >>> repo.git_pull() >>> repo.push_to_hub(commit_message="Commit my-awesome-file to the Hub") ``` However, if you aren't ready to push a file yet, you can use [`~Repository.git_add`] and [`~Repository.git_commit`] to only add and commit your file: ```py >>> repo.git_add("path/to/file") >>> repo.git_commit(commit_message="add my first model config file :)") ``` When you're ready, push the file to your repository with [`~Repository.git_push`]: ```py >>> repo.git_push() ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/upload.md
https://huggingface.co/docs/huggingface_hub/en/guides/upload/#pushtohub
#pushtohub
.md
35_17
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/
.md
36_0
The `huggingface_hub` library provides a Python interface to create, share, and update Model Cards. Visit [the dedicated documentation page](https://huggingface.co/docs/hub/models-cards) for a deeper view of what Model Cards on the Hub are, and how they work under the hood. <Tip> [New (beta)! Try our experimental Model Card Creator App](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool) </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#create-and-share-model-cards
#create-and-share-model-cards
.md
36_1
To load an existing card from the Hub, you can use the [`ModelCard.load`] function. Here, we'll load the card from [`nateraw/vit-base-beans`](https://huggingface.co/nateraw/vit-base-beans). ```python from huggingface_hub import ModelCard card = ModelCard.load('nateraw/vit-base-beans') ``` This card has some helpful attributes that you may want to access/leverage: - `card.data`: Returns a [`ModelCardData`] instance with the model card's metadata. Call `.to_dict()` on this instance to get the representation as a dictionary. - `card.text`: Returns the text of the card, *excluding the metadata header*. - `card.content`: Returns the text content of the card, *including the metadata header*.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#load-a-model-card-from-the-hub
#load-a-model-card-from-the-hub
.md
36_2
To initialize a Model Card from text, just pass the text content of the card to the `ModelCard` on init. ```python content = """ --- language: en license: mit --- # My Model Card """ card = ModelCard(content) card.data.to_dict() == {'language': 'en', 'license': 'mit'} # True ``` Another way you might want to do this is with f-strings. In the following example, we: - Use [`ModelCardData.to_yaml`] to convert metadata we defined to YAML so we can use it to insert the YAML block in the model card. - Show how you might use a template variable via Python f-strings. ```python card_data = ModelCardData(language='en', license='mit', library='timm') example_template_var = 'nateraw' content = f""" --- { card_data.to_yaml() } --- # My Model Card This model created by [@{example_template_var}](https://github.com/{example_template_var}) """ card = ModelCard(content) print(card) ``` The above example would leave us with a card that looks like this: ``` --- language: en license: mit library: timm --- # My Model Card This model created by [@nateraw](https://github.com/nateraw) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#from-text
#from-text
.md
36_3
If you have `Jinja2` installed, you can create Model Cards from a jinja template file. Let's see a basic example:

```python
from pathlib import Path

from huggingface_hub import ModelCard, ModelCardData

# Define your jinja template
template_text = """
---
{{ card_data }}
---

# Model Card for MyCoolModel

This model does this and that.

This model was created by [@{{ author }}](https://hf.co/{{author}}).
""".strip()

# Write the template to a file
Path('custom_template.md').write_text(template_text)

# Define card metadata
card_data = ModelCardData(language='en', license='mit', library_name='keras')

# Create card from template, passing it any jinja template variables you want.
# In our case, we'll pass author
card = ModelCard.from_template(card_data, template_path='custom_template.md', author='nateraw')
card.save('my_model_card_1.md')
print(card)
```

The resulting card's markdown looks like this:

```
---
language: en
license: mit
library_name: keras
---

# Model Card for MyCoolModel

This model does this and that.

This model was created by [@nateraw](https://hf.co/nateraw).
```

If you update any attribute of `card.data`, the change is reflected in the card itself.

```python
card.data.library_name = 'timm'
card.data.language = 'fr'
card.data.license = 'apache-2.0'
print(card)
```

Now, as you can see, the metadata header has been updated:

```
---
language: fr
license: apache-2.0
library_name: timm
---

# Model Card for MyCoolModel

This model does this and that.

This model was created by [@nateraw](https://hf.co/nateraw).
```

As you update the card data, you can validate that the card is still valid against the Hub by calling [`ModelCard.validate`]. This ensures that the card passes any validation rules set up on the Hugging Face Hub.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#from-a-jinja-template
#from-a-jinja-template
.md
36_4
Instead of using your own template, you can also use the [default template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md), which is a fully featured model card with tons of sections you may want to fill out. Under the hood, it uses [Jinja2](https://jinja.palletsprojects.com/en/3.1.x/) to fill out a template file. <Tip> Note that you will have to have Jinja2 installed to use `from_template`. You can do so with `pip install Jinja2`. </Tip> ```python card_data = ModelCardData(language='en', license='mit', library_name='keras') card = ModelCard.from_template( card_data, model_id='my-cool-model', model_description="this model does this and that", developers="Nate Raw", repo="https://github.com/huggingface/huggingface_hub", ) card.save('my_model_card_2.md') print(card) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#from-the-default-template
#from-the-default-template
.md
36_5
If you're authenticated with the Hugging Face Hub (either by using `huggingface-cli login` or [`login`]), you can push cards to the Hub by simply calling [`ModelCard.push_to_hub`]. Let's take a look at how to do that...

First, we'll create a new repo called 'hf-hub-modelcards-pr-test' under the authenticated user's namespace:

```python
from huggingface_hub import whoami, create_repo

user = whoami()['name']
repo_id = f'{user}/hf-hub-modelcards-pr-test'
url = create_repo(repo_id, exist_ok=True)
```

Then, we'll create a card from the default template (same as the one defined in the section above):

```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
    card_data,
    model_id='my-cool-model',
    model_description="this model does this and that",
    developers="Nate Raw",
    repo="https://github.com/huggingface/huggingface_hub",
)
```

Finally, we'll push that up to the Hub:

```python
card.push_to_hub(repo_id)
```

You can check out the resulting card [here](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/blob/main/README.md).

If you instead wanted to push a card as a pull request, you can just pass `create_pr=True` when calling `push_to_hub`:

```python
card.push_to_hub(repo_id, create_pr=True)
```

A resulting PR created from this command can be seen [here](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/discussions/3).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#share-model-cards
#share-model-cards
.md
36_6
In this section, we will see what metadata is in repo cards and how to update it.

`metadata` refers to a hash map (or key/value) context that provides some high-level information about a model, dataset or Space. That information can include details such as the model's `pipeline type`, `model_id` or `model_description`. For more detail, you can take a look at these guides: [Model Card](https://huggingface.co/docs/hub/model-cards#model-card-metadata), [Dataset Card](https://huggingface.co/docs/hub/datasets-cards#dataset-card-metadata) and [Spaces Settings](https://huggingface.co/docs/hub/spaces-settings#spaces-settings). Now let's see some examples of how to update this metadata.

Let's start with a first example:

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "image-classification"})
```

With these two lines of code you will update the metadata to set a new `pipeline_tag`.

By default, you cannot update a key that already exists on the card. If you want to do so, you must pass `overwrite=True` explicitly:

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "text-generation"}, overwrite=True)
```

It often happens that you want to suggest some changes to a repository on which you don't have write permission. You can do that by creating a PR on that repo, which will allow the owners to review and merge your suggestions.

```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("someone/model", {"pipeline_tag": "text-classification"}, create_pr=True)
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#update-metadata
#update-metadata
.md
36_7
To include evaluation results in the metadata `model-index`, you can pass an [`EvalResult`] or a list of `EvalResult` with your associated evaluation results. Under the hood it'll create the `model-index` when you call `card.data.to_dict()`. For more information on how this works, you can check out [this section of the Hub docs](https://huggingface.co/docs/hub/models-cards#evaluation-results). <Tip> Note that using this function requires you to include the `model_name` attribute in [`ModelCardData`]. </Tip> ```python card_data = ModelCardData( language='en', license='mit', model_name='my-cool-model', eval_results = EvalResult( task_type='image-classification', dataset_type='beans', dataset_name='Beans', metric_type='accuracy', metric_value=0.7 ) ) card = ModelCard.from_template(card_data) print(card.data) ``` The resulting `card.data` should look like this: ``` language: en license: mit model-index: - name: my-cool-model results: - task: type: image-classification dataset: name: Beans type: beans metrics: - type: accuracy value: 0.7 ``` If you have more than one evaluation result you'd like to share, just pass a list of `EvalResult`: ```python card_data = ModelCardData( language='en', license='mit', model_name='my-cool-model', eval_results = [ EvalResult( task_type='image-classification', dataset_type='beans', dataset_name='Beans', metric_type='accuracy', metric_value=0.7 ), EvalResult( task_type='image-classification', dataset_type='beans', dataset_name='Beans', metric_type='f1', metric_value=0.65 ) ] ) card = ModelCard.from_template(card_data) card.data ``` Which should leave you with the following `card.data`: ``` language: en license: mit model-index: - name: my-cool-model results: - task: type: image-classification dataset: name: Beans type: beans metrics: - type: accuracy value: 0.7 - type: f1 value: 0.65 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/model-cards.md
https://huggingface.co/docs/huggingface_hub/en/guides/model-cards/#include-evaluation-results
#include-evaluation-results
.md
36_8
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/
.md
37_0
The `huggingface_hub` library provides functions to download files from the repositories stored on the Hub. You can use these functions independently or integrate them into your own library, making it more convenient for your users to interact with the Hub. This guide will show you how to: * Download and cache a single file. * Download and cache an entire repository. * Download files to a local folder.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#download-files-from-the-hub
#download-files-from-the-hub
.md
37_1
The [`hf_hub_download`] function is the main function for downloading files from the Hub. It downloads the remote file, caches it on disk (in a version-aware way), and returns its local file path. <Tip> The returned filepath is a pointer to the HF local cache. Therefore, it is important to not modify the file to avoid having a corrupted cache. If you are interested in getting to know more about how files are cached, please refer to our [caching guide](./manage-cache). </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#download-a-single-file
#download-a-single-file
.md
37_2
Select the file to download using the `repo_id`, `repo_type` and `filename` parameters. By default, the file will be considered as being part of a `model` repo. ```python >>> from huggingface_hub import hf_hub_download >>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json") '/root/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade/config.json' # Download from a dataset >>> hf_hub_download(repo_id="google/fleurs", filename="fleurs.py", repo_type="dataset") '/root/.cache/huggingface/hub/datasets--google--fleurs/snapshots/199e4ae37915137c555b1765c01477c216287d34/fleurs.py' ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#from-latest-version
#from-latest-version
.md
37_3
By default, the latest version from the `main` branch is downloaded. However, in some cases you want to download a file at a particular version (e.g. from a specific branch, a PR, a tag or a commit hash). To do so, use the `revision` parameter: ```python # Download from the `v1.0` tag >>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="v1.0") # Download from the `test-branch` branch >>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="test-branch") # Download from Pull Request #3 >>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="refs/pr/3") # Download from a specific commit hash >>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="877b84a8f93f2d619faa2a6e514a32beef88ab0a") ``` **Note:** When using the commit hash, it must be the full-length hash instead of a 7-character commit hash.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#from-specific-version
#from-specific-version
.md
37_4
In case you want to construct the URL used to download a file from a repo, you can use [`hf_hub_url`] which returns a URL. Note that it is used internally by [`hf_hub_download`].
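As a rough sketch of the URL scheme, ignoring custom endpoints, repo types other than models, and URL-encoding (all of which the real [`hf_hub_url`] handles for you), the resolve URL looks like this:

```python
def build_hub_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # Simplified, hypothetical re-implementation for illustration only
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"
```

For example, `build_hub_url("lysandre/arxiv-nlp", "config.json")` points at the `config.json` file on the `main` branch.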
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#construct-a-download-url
#construct-a-download-url
.md
37_5
[`snapshot_download`] downloads an entire repository at a given revision. It uses [`hf_hub_download`] internally, which means all downloaded files are also cached on your local disk. Downloads are made concurrently to speed up the process.

To download a whole repository, just pass the `repo_id` and `repo_type`:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp")
'/home/lysandre/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade'

# Or from a dataset
>>> snapshot_download(repo_id="google/fleurs", repo_type="dataset")
'/home/lysandre/.cache/huggingface/hub/datasets--google--fleurs/snapshots/199e4ae37915137c555b1765c01477c216287d34'
```

[`snapshot_download`] downloads the latest revision by default. If you want a specific repository revision, use the `revision` parameter:

```python
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp", revision="refs/pr/1")
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#download-an-entire-repository
#download-an-entire-repository
.md
37_6
[`snapshot_download`] provides an easy way to download a repository. However, you don't always want to download the entire content of a repository. For example, you might want to prevent downloading all `.bin` files if you know you'll only use the `.safetensors` weights. You can do that using `allow_patterns` and `ignore_patterns` parameters. These parameters accept either a single pattern or a list of patterns. Patterns are Standard Wildcards (globbing patterns) as documented [here](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm). The pattern matching is based on [`fnmatch`](https://docs.python.org/3/library/fnmatch.html). For example, you can use `allow_patterns` to only download JSON configuration files: ```python >>> from huggingface_hub import snapshot_download >>> snapshot_download(repo_id="lysandre/arxiv-nlp", allow_patterns="*.json") ``` On the other hand, `ignore_patterns` can exclude certain files from being downloaded. The following example ignores the `.msgpack` and `.h5` file extensions: ```python >>> from huggingface_hub import snapshot_download >>> snapshot_download(repo_id="lysandre/arxiv-nlp", ignore_patterns=["*.msgpack", "*.h5"]) ``` Finally, you can combine both to precisely filter your download. Here is an example to download all json and markdown files except `vocab.json`. ```python >>> from huggingface_hub import snapshot_download >>> snapshot_download(repo_id="gpt2", allow_patterns=["*.md", "*.json"], ignore_patterns="vocab.json") ```
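To make the matching semantics concrete, here is a minimal sketch of how such filtering could be implemented with `fnmatch` (illustrative only — the actual implementation in `huggingface_hub` differs in its details):

```python
from fnmatch import fnmatch

def filter_repo_files(files, allow_patterns=None, ignore_patterns=None):
    """Keep files matching any allow pattern (if given) and no ignore pattern."""
    if isinstance(allow_patterns, str):
        allow_patterns = [allow_patterns]
    if isinstance(ignore_patterns, str):
        ignore_patterns = [ignore_patterns]
    kept = []
    for f in files:
        # If allow patterns are given, the file must match at least one of them
        if allow_patterns is not None and not any(fnmatch(f, p) for p in allow_patterns):
            continue
        # The file must not match any ignore pattern
        if ignore_patterns is not None and any(fnmatch(f, p) for p in ignore_patterns):
            continue
        kept.append(f)
    return kept
```

With this sketch, `filter_repo_files(files, allow_patterns=["*.md", "*.json"], ignore_patterns="vocab.json")` keeps markdown and JSON files except `vocab.json`, matching the `snapshot_download` example above.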
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#filter-files-to-download
#filter-files-to-download
.md
37_7
By default, we recommend using the [cache system](./manage-cache) to download files from the Hub. You can specify a custom cache location using the `cache_dir` parameter in [`hf_hub_download`] and [`snapshot_download`], or by setting the [`HF_HOME`](../package_reference/environment_variables#hf_home) environment variable. However, if you need to download files to a specific folder, you can pass a `local_dir` parameter to the download function. This is useful to get a workflow closer to what the `git` command offers. The downloaded files will maintain their original file structure within the specified folder. For example, if `filename="data/train.csv"` and `local_dir="path/to/folder"`, the resulting filepath will be `"path/to/folder/data/train.csv"`. A `.cache/huggingface/` folder is created at the root of your local directory containing metadata about the downloaded files. This prevents re-downloading files if they're already up-to-date. If the metadata has changed, then the new file version is downloaded. This makes the `local_dir` optimized for pulling only the latest changes. After completing the download, you can safely remove the `.cache/huggingface/` folder if you no longer need it. However, be aware that re-running your script without this folder may result in longer recovery times, as metadata will be lost. Rest assured that your local data will remain intact and unaffected. <Tip> Don't worry about the `.cache/huggingface/` folder when committing changes to the Hub! This folder is automatically ignored by both `git` and [`upload_folder`]. </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#download-files-to-a-local-folder
#download-files-to-a-local-folder
.md
37_8
You can use the `huggingface-cli download` command from the terminal to directly download files from the Hub. Internally, it uses the same [`hf_hub_download`] and [`snapshot_download`] helpers described above and prints the returned path to the terminal. ```bash >>> huggingface-cli download gpt2 config.json /home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json ``` You can download multiple files at once which displays a progress bar and returns the snapshot path in which the files are located: ```bash >>> huggingface-cli download gpt2 config.json model.safetensors Fetching 2 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 23831.27it/s] /home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10 ``` For more details about the CLI download command, please refer to the [CLI guide](./cli#huggingface-cli-download).
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#download-from-the-cli
#download-from-the-cli
.md
37_9
If you are running on a machine with high bandwidth, you can increase your download speed with [`hf_transfer`](https://github.com/huggingface/hf_transfer), a Rust-based library developed to speed up file transfers with the Hub. To enable it: 1. Specify the `hf_transfer` extra when installing `huggingface_hub` (e.g. `pip install huggingface_hub[hf_transfer]`). 2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable. <Tip warning={true}> `hf_transfer` is a power user tool! It is tested and production-ready, but it lacks user-friendly features like advanced error handling or proxies. For more details, please take a look at this [section](https://huggingface.co/docs/huggingface_hub/hf_transfer). </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/download.md
https://huggingface.co/docs/huggingface_hub/en/guides/download/#faster-downloads
#faster-downloads
.md
37_10
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/
.md
38_0
The Hugging Face Hub cache-system is designed to be the central cache shared across libraries that depend on the Hub. It has been updated in v0.8.0 to prevent re-downloading the same files between revisions.

The caching system is designed as follows:

```
<CACHE_DIR>
β”œβ”€ <MODELS>
β”œβ”€ <DATASETS>
β”œβ”€ <SPACES>
```

The `<CACHE_DIR>` is usually your user's home directory. However, it is customizable with the `cache_dir` argument on all methods, or by specifying either the `HF_HOME` or `HF_HUB_CACHE` environment variable.

Models, datasets and spaces share a common root. Each repository folder is named after the repository type, the namespace (organization or username) if it exists, and the repository name:

```
<CACHE_DIR>
β”œβ”€ models--julien-c--EsperBERTo-small
β”œβ”€ models--lysandrejik--arxiv-nlp
β”œβ”€ models--bert-base-cased
β”œβ”€ datasets--glue
β”œβ”€ datasets--huggingface--DataMeasurementsFiles
β”œβ”€ spaces--dalle-mini--dalle-mini
```

It is within these folders that all files will now be downloaded from the Hub.

Caching ensures that a file isn't downloaded twice if it already exists and wasn't updated; but if it was updated, and you're asking for the latest file, then it will download the latest file (while keeping the previous file intact in case you need it again).

In order to achieve this, all folders contain the same skeleton:

```
<CACHE_DIR>
β”œβ”€ datasets--glue
β”‚  β”œβ”€ refs
β”‚  β”œβ”€ blobs
β”‚  β”œβ”€ snapshots
...
```

Each folder is designed to contain the following:
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#understand-caching
#understand-caching
.md
38_1
The `refs` folder contains files which indicates the latest revision of the given reference. For example, if we have previously fetched a file from the `main` branch of a repository, the `refs` folder will contain a file named `main`, which will itself contain the commit identifier of the current head. If the latest commit of `main` has `aaaaaa` as identifier, then it will contain `aaaaaa`. If that same branch gets updated with a new commit, that has `bbbbbb` as an identifier, then re-downloading a file from that reference will update the `refs/main` file to contain `bbbbbb`.
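Resolving a cached reference therefore amounts to reading a one-line file. A minimal sketch (hypothetical helper; paths follow the layout described above):

```python
from pathlib import Path

def resolve_ref(repo_cache_dir: str, ref: str = "main") -> str:
    """Return the commit hash that a cached reference currently points to."""
    return (Path(repo_cache_dir) / "refs" / ref).read_text().strip()
```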
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#refs
#refs
.md
38_2
The `blobs` folder contains the actual files that we have downloaded. Each file is named after its hash.
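For regular (non-LFS) files, this hash is typically the file's git blob hash, while LFS files are named after a sha256 digest. As an illustration, a git blob hash can be computed as follows:

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute git's blob hash: sha1 over the header 'blob <size>\\0' followed by the content."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()
```

This matches what `git hash-object` computes on the same content.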
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#blobs
#blobs
.md
38_3
The `snapshots` folder contains symlinks to the blobs mentioned above. It is itself made up of several folders: one per known revision!

In the explanation above, we had initially fetched a file from the `aaaaaa` revision, before fetching a file from the `bbbbbb` revision. In this situation, we would now have two folders in the `snapshots` folder: `aaaaaa` and `bbbbbb`.

Each of these folders contains symlinks named after the files that we have downloaded. For example, if we had downloaded the `README.md` file at revision `aaaaaa`, we would have the following path:

```
<CACHE_DIR>/<REPO_NAME>/snapshots/aaaaaa/README.md
```

That `README.md` file is actually a symlink linking to the blob that has the hash of the file.

Structuring the cache this way enables file sharing: if the same file was fetched in revision `bbbbbb`, it would have the same hash, so it would not need to be re-downloaded.
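To make the sharing mechanism concrete, here is a self-contained sketch that builds a tiny blob/snapshot layout (hypothetical hashes and contents; assumes a platform where symlinks are supported):

```python
import os
import tempfile

cache = tempfile.mkdtemp()

# One blob, named after the (made-up) hash of its content
blobs_dir = os.path.join(cache, "blobs")
os.makedirs(blobs_dir)
blob_path = os.path.join(blobs_dir, "d7edf6bd")
with open(blob_path, "w") as f:
    f.write("# README contents")

# Two revisions contain the same README.md: both snapshots symlink to the same blob
for revision in ("aaaaaa", "bbbbbb"):
    snapshot_dir = os.path.join(cache, "snapshots", revision)
    os.makedirs(snapshot_dir)
    os.symlink(os.path.relpath(blob_path, snapshot_dir), os.path.join(snapshot_dir, "README.md"))

# Both symlinks resolve to the same file on disk: the content is stored only once
a = os.path.realpath(os.path.join(cache, "snapshots", "aaaaaa", "README.md"))
b = os.path.realpath(os.path.join(cache, "snapshots", "bbbbbb", "README.md"))
print(a == b)  # True
```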
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#snapshots
#snapshots
.md
38_4
In addition to the `blobs`, `refs` and `snapshots` folders, you might also find a `.no_exist` folder in your cache. This folder keeps track of files that you've tried to download once but that don't exist on the Hub. Its structure is the same as the `snapshots` folder with 1 subfolder per known revision:

```
<CACHE_DIR>/<REPO_NAME>/.no_exist/aaaaaa/config_that_does_not_exist.json
```

Unlike the `snapshots` folder, files are simple empty files (no symlinks). In this example, the file `"config_that_does_not_exist.json"` does not exist on the Hub for the revision `"aaaaaa"`. As it only stores empty files, this folder is negligible in terms of disk usage.

So now you might wonder, why is this information even relevant? In some cases, a framework tries to load optional files for a model. Saving the non-existence of optional files makes it faster to load a model as it saves 1 HTTP call per possible optional file. This is for example the case in `transformers`, where each tokenizer can support additional files. The first time you load the tokenizer on your machine, it will cache which optional files exist (and which don't) to make the loading time faster for the next initializations.

To test if a file is cached locally (without making any HTTP request), you can use the [`try_to_load_from_cache`] helper. It will either return the filepath (if it exists and is cached), the object `_CACHED_NO_EXIST` (if non-existence is cached) or `None` (if we don't know).

```python
from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST

# `repo_id` and `filename` are required (shown here with example values)
filepath = try_to_load_from_cache(repo_id="bert-base-uncased", filename="config.json")
if isinstance(filepath, str):
    # file exists and is cached
    ...
elif filepath is _CACHED_NO_EXIST:
    # non-existence of file is cached
    ...
else:
    # file is not cached
    ...
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#noexist-advanced
#noexist-advanced
.md
38_5
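The `.no_exist` bookkeeping boils down to empty marker files. Here is a minimal stdlib sketch of the idea (hypothetical repo, revision and filenames; independent of `huggingface_hub` itself):

```python
import tempfile
from pathlib import Path

# Toy cache with a .no_exist folder for revision "aaaaaa".
cache = Path(tempfile.mkdtemp())
no_exist = cache / "models--example" / ".no_exist" / "aaaaaa"
no_exist.mkdir(parents=True)

# An empty marker file records that this optional file is absent on the Hub.
(no_exist / "added_tokens.json").touch()

def is_cached_as_missing(filename: str) -> bool:
    # A marker's mere existence answers "is this file known to be missing?"
    # without any HTTP call.
    return (no_exist / filename).exists()
```

Checking the marker is a cheap local filesystem lookup, which is why this saves one HTTP call per optional file.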
In practice, your cache should look like the following tree: ```text [ 96] . └── [ 160] models--julien-c--EsperBERTo-small β”œβ”€β”€ [ 160] blobs β”‚ β”œβ”€β”€ [321M] 403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd β”‚ β”œβ”€β”€ [ 398] 7cb18dc9bafbfcf74629a4b760af1b160957a83e β”‚ └── [1.4K] d7edf6bd2a681fb0175f7735299831ee1b22b812 β”œβ”€β”€ [ 96] refs β”‚ └── [ 40] main └── [ 128] snapshots β”œβ”€β”€ [ 128] 2439f60ef33a0d46d85da5001d52aeda5b00ce9f β”‚ β”œβ”€β”€ [ 52] README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812 β”‚ └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd └── [ 128] bbc77c8132af1cc5cf678da3f1ddf2de43606d48 β”œβ”€β”€ [ 52] README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#in-practice
#in-practice
.md
38_6
In order to have an efficient cache-system, `huggingface-hub` uses symlinks. However, symlinks are not supported on all machines. This is a known limitation, especially on Windows. When this is the case, `huggingface_hub` does not use the `blobs/` directory but directly stores the files in the `snapshots/` directory instead. This workaround allows users to download and cache files from the Hub exactly the same way. Tools to inspect and delete the cache (see below) are also supported. However, the cache-system is less efficient as a single file might be downloaded several times if multiple revisions of the same repo are downloaded. If you want to benefit from the symlink-based cache-system on a Windows machine, you either need to [activate Developer Mode](https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development) or to run Python as an administrator. When symlinks are not supported, a warning message is displayed to alert users that they are using a degraded version of the cache-system. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable to true.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#limitations
#limitations
.md
38_7
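Whether symlinks work on a given machine can be probed by simply trying to create one. This is a hedged sketch of such a check, not the exact test `huggingface_hub` runs internally:

```python
import os
import tempfile

def symlinks_supported(directory: str) -> bool:
    """Heuristic: try to create a symlink in `directory` and report success.

    On Windows without Developer Mode or admin rights, os.symlink raises
    OSError, which is what forces the degraded (no blobs/) cache layout.
    """
    src = tempfile.NamedTemporaryFile(dir=directory, delete=False)
    src.close()
    dst = src.name + ".symlink"
    try:
        os.symlink(src.name, dst)
        return True
    except OSError:
        return False
    finally:
        # Clean up both the symlink (if created) and the temp file.
        for path in (dst, src.name):
            try:
                os.remove(path)
            except OSError:
                pass
```

On Unix-like systems this returns `True`; on a restricted Windows setup it returns `False`, which is the situation the warning above is about.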
In addition to caching files from the Hub, downstream libraries often need to cache other files related to HF but not handled directly by `huggingface_hub` (example: files downloaded from GitHub, preprocessed data, logs,...). In order to cache those files, called `assets`, one can use [`cached_assets_path`]. This small helper generates paths in the HF cache in a unified way, based on the name of the library requesting it and optionally on a namespace and a subfolder name. The goal is to let every downstream library manage its assets its own way (e.g. no rule on the structure) as long as they stay in the right assets folder. Those libraries can then leverage tools from `huggingface_hub` to manage the cache, in particular scanning and deleting parts of the assets from a CLI command. ```py from huggingface_hub import cached_assets_path assets_path = cached_assets_path(library_name="datasets", namespace="SQuAD", subfolder="download") something_path = assets_path / "something.json" # Do anything you like in your assets folder! ``` <Tip> [`cached_assets_path`] is the recommended way to store assets but is not mandatory. If your library already uses its own cache, feel free to use it! </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#caching-assets
#caching-assets
.md
38_8
In practice, your assets cache should look like the following tree: ```text assets/ └── datasets/ β”‚ β”œβ”€β”€ SQuAD/ β”‚ β”‚ β”œβ”€β”€ downloaded/ β”‚ β”‚ β”œβ”€β”€ extracted/ β”‚ β”‚ └── processed/ β”‚ β”œβ”€β”€ Helsinki-NLP--tatoeba_mt/ β”‚ β”œβ”€β”€ downloaded/ β”‚ β”œβ”€β”€ extracted/ β”‚ └── processed/ └── transformers/ β”œβ”€β”€ default/ β”‚ β”œβ”€β”€ something/ β”œβ”€β”€ bert-base-cased/ β”‚ β”œβ”€β”€ default/ β”‚ └── training/ hub/ └── models--julien-c--EsperBERTo-small/ β”œβ”€β”€ blobs/ β”‚ β”œβ”€β”€ (...) β”‚ β”œβ”€β”€ (...) β”œβ”€β”€ refs/ β”‚ └── (...) └── [ 128] snapshots/ β”œβ”€β”€ 2439f60ef33a0d46d85da5001d52aeda5b00ce9f/ β”‚ β”œβ”€β”€ (...) └── bbc77c8132af1cc5cf678da3f1ddf2de43606d48/ └── (...) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#assets-in-practice
#assets-in-practice
.md
38_9
At the moment, cached files are never deleted from your local directory: when you download a new revision of a branch, previous files are kept in case you need them again. Therefore it can be useful to scan your cache directory in order to know which repos and revisions are taking the most disk space. `huggingface_hub` provides a helper to do so, which can be used via `huggingface-cli` or in a Python script.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#scan-your-cache
#scan-your-cache
.md
38_10
The easiest way to scan your HF cache-system is to use the `scan-cache` command from `huggingface-cli` tool. This command scans the cache and prints a report with information like repo id, repo type, disk usage, refs and full local path. The snippet below shows a scan report in a folder in which 4 models and 2 datasets are cached. ```text ➜ huggingface-cli scan-cache REPO ID REPO TYPE SIZE ON DISK NB FILES LAST_ACCESSED LAST_MODIFIED REFS LOCAL PATH --------------------------- --------- ------------ -------- ------------- ------------- ------------------- ------------------------------------------------------------------------- glue dataset 116.3K 15 4 days ago 4 days ago 2.4.0, main, 1.17.0 /home/wauplin/.cache/huggingface/hub/datasets--glue google/fleurs dataset 64.9M 6 1 week ago 1 week ago refs/pr/1, main /home/wauplin/.cache/huggingface/hub/datasets--google--fleurs Jean-Baptiste/camembert-ner model 441.0M 7 2 weeks ago 16 hours ago main /home/wauplin/.cache/huggingface/hub/models--Jean-Baptiste--camembert-ner bert-base-cased model 1.9G 13 1 week ago 2 years ago /home/wauplin/.cache/huggingface/hub/models--bert-base-cased t5-base model 10.1K 3 3 months ago 3 months ago main /home/wauplin/.cache/huggingface/hub/models--t5-base t5-small model 970.7M 11 3 days ago 3 days ago refs/pr/1, main /home/wauplin/.cache/huggingface/hub/models--t5-small Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G. Got 1 warning(s) while scanning. Use -vvv to print details. ``` To get a more detailed report, use the `--verbose` option. For each repo, you get a list of all revisions that have been downloaded. As explained above, the files that don't change between 2 revisions are shared thanks to the symlinks. This means that the size of the repo on disk is expected to be less than the sum of the size of each of its revisions. For example, here `bert-base-cased` has 2 revisions of 1.4G and 1.5G but the total disk usage is only 1.9G. 
```text ➜ huggingface-cli scan-cache -v REPO ID REPO TYPE REVISION SIZE ON DISK NB FILES LAST_MODIFIED REFS LOCAL PATH --------------------------- --------- ---------------------------------------- ------------ -------- ------------- ----------- ---------------------------------------------------------------------------------------------------------------------------- glue dataset 9338f7b671827df886678df2bdd7cc7b4f36dffd 97.7K 14 4 days ago main, 2.4.0 /home/wauplin/.cache/huggingface/hub/datasets--glue/snapshots/9338f7b671827df886678df2bdd7cc7b4f36dffd glue dataset f021ae41c879fcabcf823648ec685e3fead91fe7 97.8K 14 1 week ago 1.17.0 /home/wauplin/.cache/huggingface/hub/datasets--glue/snapshots/f021ae41c879fcabcf823648ec685e3fead91fe7 google/fleurs dataset 129b6e96cf1967cd5d2b9b6aec75ce6cce7c89e8 25.4K 3 2 weeks ago refs/pr/1 /home/wauplin/.cache/huggingface/hub/datasets--google--fleurs/snapshots/129b6e96cf1967cd5d2b9b6aec75ce6cce7c89e8 google/fleurs dataset 24f85a01eb955224ca3946e70050869c56446805 64.9M 4 1 week ago main /home/wauplin/.cache/huggingface/hub/datasets--google--fleurs/snapshots/24f85a01eb955224ca3946e70050869c56446805 Jean-Baptiste/camembert-ner model dbec8489a1c44ecad9da8a9185115bccabd799fe 441.0M 7 16 hours ago main /home/wauplin/.cache/huggingface/hub/models--Jean-Baptiste--camembert-ner/snapshots/dbec8489a1c44ecad9da8a9185115bccabd799fe bert-base-cased model 378aa1bda6387fd00e824948ebe3488630ad8565 1.5G 9 2 years ago /home/wauplin/.cache/huggingface/hub/models--bert-base-cased/snapshots/378aa1bda6387fd00e824948ebe3488630ad8565 bert-base-cased model a8d257ba9925ef39f3036bfc338acf5283c512d9 1.4G 9 3 days ago main /home/wauplin/.cache/huggingface/hub/models--bert-base-cased/snapshots/a8d257ba9925ef39f3036bfc338acf5283c512d9 t5-base model 23aa4f41cb7c08d4b05c8f327b22bfa0eb8c7ad9 10.1K 3 1 week ago main /home/wauplin/.cache/huggingface/hub/models--t5-base/snapshots/23aa4f41cb7c08d4b05c8f327b22bfa0eb8c7ad9 t5-small model 
98ffebbb27340ec1b1abd7c45da12c253ee1882a 726.2M 6 1 week ago refs/pr/1 /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/98ffebbb27340ec1b1abd7c45da12c253ee1882a t5-small model d0a119eedb3718e34c648e594394474cf95e0617 485.8M 6 4 weeks ago /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/d0a119eedb3718e34c648e594394474cf95e0617 t5-small model d78aea13fa7ecd06c29e3e46195d6341255065d5 970.7M 9 1 week ago main /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/d78aea13fa7ecd06c29e3e46195d6341255065d5 Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G. Got 1 warning(s) while scanning. Use -vvv to print details. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#scan-cache-from-the-terminal
#scan-cache-from-the-terminal
.md
38_11
Since the output is in tabular format, you can combine it with any `grep`-like tool to filter the entries. Here is an example to filter only revisions from the "t5-small" model on a Unix-based machine. ```text ➜ huggingface-cli scan-cache -v | grep "t5-small" t5-small model 98ffebbb27340ec1b1abd7c45da12c253ee1882a 726.2M 6 1 week ago refs/pr/1 /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/98ffebbb27340ec1b1abd7c45da12c253ee1882a t5-small model d0a119eedb3718e34c648e594394474cf95e0617 485.8M 6 4 weeks ago /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/d0a119eedb3718e34c648e594394474cf95e0617 t5-small model d78aea13fa7ecd06c29e3e46195d6341255065d5 970.7M 9 1 week ago main /home/wauplin/.cache/huggingface/hub/models--t5-small/snapshots/d78aea13fa7ecd06c29e3e46195d6341255065d5 ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#grep-example
#grep-example
.md
38_12
For a more advanced usage, use [`scan_cache_dir`] which is the Python utility called by the CLI tool. You can use it to get a detailed report structured around 4 dataclasses: - [`HFCacheInfo`]: complete report returned by [`scan_cache_dir`] - [`CachedRepoInfo`]: information about a cached repo - [`CachedRevisionInfo`]: information about a cached revision (e.g. "snapshot") inside a repo - [`CachedFileInfo`]: information about a cached file in a snapshot Here is a simple usage example. See reference for details. ```py >>> from huggingface_hub import scan_cache_dir >>> hf_cache_info = scan_cache_dir() HFCacheInfo( size_on_disk=3398085269, repos=frozenset({ CachedRepoInfo( repo_id='t5-small', repo_type='model', repo_path=PosixPath(...), size_on_disk=970726914, nb_files=11, last_accessed=1662971707.3567169, last_modified=1662971107.3567169, revisions=frozenset({ CachedRevisionInfo( commit_hash='d78aea13fa7ecd06c29e3e46195d6341255065d5', size_on_disk=970726339, snapshot_path=PosixPath(...), # No `last_accessed` as blobs are shared among revisions last_modified=1662971107.3567169, files=frozenset({ CachedFileInfo( file_name='config.json', size_on_disk=1197, file_path=PosixPath(...), blob_path=PosixPath(...), blob_last_accessed=1662971707.3567169, blob_last_modified=1662971107.3567169, ), CachedFileInfo(...), ... }), ), CachedRevisionInfo(...), ... }), ), CachedRepoInfo(...), ... }), warnings=[ CorruptedCacheException("Snapshots dir doesn't exist in cached repo: ..."), CorruptedCacheException(...), ... ], ) ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#scan-cache-from-python
#scan-cache-from-python
.md
38_13
Scanning your cache is interesting but what you really want to do next is usually to delete some portions to free up some space on your drive. This is possible using the `delete-cache` CLI command. One can also programmatically use the [`~HFCacheInfo.delete_revisions`] helper from [`HFCacheInfo`] object returned when scanning the cache.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#clean-your-cache
#clean-your-cache
.md
38_14
To delete some cache, you need to pass a list of revisions to delete. The tool will define a strategy to free up the space based on this list. It returns a [`DeleteCacheStrategy`] object that describes which files and folders will be deleted. The [`DeleteCacheStrategy`] also gives you an estimate of how much space is expected to be freed. Once you agree with the deletion, you must execute it to make the deletion effective. In order to avoid discrepancies, you cannot edit a strategy object manually. The strategy to delete revisions is the following: - the `snapshot` folder containing the revision symlinks is deleted. - blob files that are targeted only by revisions to be deleted are deleted as well. - if a revision is linked to 1 or more `refs`, the references are deleted. - if all revisions from a repo are deleted, the entire cached repository is deleted. <Tip> Revision hashes are unique across all repositories. This means you don't need to provide any `repo_id` or `repo_type` when removing revisions. </Tip> <Tip warning={true}> If a revision is not found in the cache, it will be silently ignored. Besides, if a file or folder cannot be found while trying to delete it, a warning will be logged but no error is thrown. The deletion continues for other paths contained in the [`DeleteCacheStrategy`] object. </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#delete-strategy
#delete-strategy
.md
38_15
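The second rule of the strategy (a blob is removed only when no surviving revision still references it) boils down to simple set arithmetic. A toy sketch with made-up revision and blob names, not the actual library code:

```python
# Hypothetical mapping: revision hash -> set of blob hashes it references.
revisions = {
    "aaaaaa": {"blob1", "blob2"},
    "bbbbbb": {"blob2", "blob3"},
}

def blobs_to_delete(revisions: dict, to_delete: set) -> set:
    """Blobs referenced *only* by the revisions being deleted."""
    kept = set()
    for rev, blobs in revisions.items():
        if rev not in to_delete:
            kept |= blobs
    doomed = set()
    for rev in to_delete:
        doomed |= revisions[rev]
    return doomed - kept

# "blob2" survives because revision "bbbbbb" still references it.
deletable = blobs_to_delete(revisions, {"aaaaaa"})
```

Deleting every revision of the toy repo would make all three blobs deletable, which corresponds to the last rule: the entire cached repository is removed.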
The easiest way to delete some revisions from your HF cache-system is to use the `delete-cache` command from the `huggingface-cli` tool. The command has two modes. By default, a TUI (Terminal User Interface) is displayed to the user to select which revisions to delete. This TUI is currently in beta as it has not been tested on all platforms. If the TUI doesn't work on your machine, you can disable it using the `--disable-tui` flag.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#clean-cache-from-the-terminal
#clean-cache-from-the-terminal
.md
38_16
This is the default mode. To use it, you first need to install extra dependencies by running the following command: ``` pip install huggingface_hub["cli"] ``` Then run the command: ``` huggingface-cli delete-cache ``` You should now see a list of revisions that you can select/deselect: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/delete-cache-tui.png"/> </div> Instructions: - Press keyboard arrow keys `<up>` and `<down>` to move the cursor. - Press `<space>` to toggle (select/unselect) an item. - When a revision is selected, the first line is updated to show you how much space will be freed. - Press `<enter>` to confirm your selection. - If you want to cancel the operation and quit, you can select the first item ("None of the following"). If this item is selected, the delete process will be cancelled, no matter what other items are selected. Otherwise you can also press `<ctrl+c>` to quit the TUI. Once you've selected the revisions you want to delete and pressed `<enter>`, you will be prompted for a final confirmation. Press `<enter>` again and the deletion will be effective. If you want to cancel, enter `n`. ```txt βœ— huggingface-cli delete-cache --dir ~/.cache/huggingface/hub ? Select revisions to delete: 2 revision(s) selected. ? 2 revisions selected counting for 3.1G. Confirm deletion ? Yes Start deletion. Done. Deleted 1 repo(s) and 0 revision(s) for a total of 3.1G. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#using-the-tui
#using-the-tui
.md
38_17
As mentioned above, the TUI mode is currently in beta and is optional. It may be the case that it doesn't work on your machine or that you don't find it convenient. Another approach is to use the `--disable-tui` flag. The process is very similar: you will be asked to manually review the list of revisions to delete. However, this manual step will not take place in the terminal directly but in a temporary file generated on the fly that you can manually edit. This file has all the instructions you need in the header. Open it in your favorite text editor. To select/deselect a revision, simply comment/uncomment it with a `#`. Once the manual review is done and the file is edited, you can save it. Go back to your terminal and press `<enter>`. By default it will compute how much space would be freed with the updated list of revisions. You can continue to edit the file or confirm with `"y"`. ```sh huggingface-cli delete-cache --disable-tui ``` Example of command file: ```txt # INSTRUCTIONS # ------------ # This is a temporary file created by running `huggingface-cli delete-cache` with the # `--disable-tui` option. It contains a set of revisions that can be deleted from your # local cache directory. # # Please manually review the revisions you want to delete: # - Revision hashes can be commented out with '#'. # - Only non-commented revisions in this file will be deleted. # - Revision hashes that are removed from this file are ignored as well. # - If the `CANCEL_DELETION` line is uncommented, the whole cache deletion is cancelled and # no changes will be applied. # # Once you've manually reviewed this file, please confirm deletion in the terminal. This # file will be automatically removed once done. 
# ------------ # KILL SWITCH # ------------ # Un-comment following line to completely cancel the deletion process # CANCEL_DELETION # ------------ # REVISIONS # ------------ # Dataset chrisjay/crowd-speech-africa (761.7M, used 5 days ago) ebedcd8c55c90d39fd27126d29d8484566cd27ca # Refs: main # modified 5 days ago # Dataset oscar (3.3M, used 4 days ago) # 916f956518279c5e60c63902ebdf3ddf9fa9d629 # Refs: main # modified 4 days ago # Dataset wikiann (804.1K, used 2 weeks ago) 89d089624b6323d69dcd9e5eb2def0551887a73a # Refs: main # modified 2 weeks ago # Dataset z-uo/male-LJSpeech-italian (5.5G, used 5 days ago) # 9cfa5647b32c0a30d0adfca06bf198d82192a0d1 # Refs: main # modified 5 days ago ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#without-tui
#without-tui
.md
38_18
For more flexibility, you can also use the [`~HFCacheInfo.delete_revisions`] method programmatically. Here is a simple example. See reference for details. ```py >>> from huggingface_hub import scan_cache_dir >>> delete_strategy = scan_cache_dir().delete_revisions( ...     "81fd1d6e7847c99f5862c9fb81387956d99ec7aa", ...     "e2983b237dccf3ab4937c97fa717319a9ca1a96d", ...     "6c0e6080953db56375760c0471a8c5f2929baf11", ... ) >>> print("Will free " + delete_strategy.expected_freed_size_str) Will free 8.6G >>> delete_strategy.execute() Cache deletion done. Saved 8.6G. ```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/guides/manage-cache.md
https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache/#clean-cache-from-python
#clean-cache-from-python
.md
38_19
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/concepts/git_vs_http.md
https://huggingface.co/docs/huggingface_hub/en/concepts/git_vs_http/
.md
39_0
The `huggingface_hub` library is used to interact with the Hugging Face Hub, which is a collection of git-based repositories (models, datasets or Spaces). There are two main ways to access the Hub using `huggingface_hub`. The first approach, the so-called "git-based" approach, is led by the [`Repository`] class. This method uses a wrapper around the `git` command with additional functions specifically designed to interact with the Hub. The second option, called the "HTTP-based" approach, involves making HTTP requests using the [`HfApi`] client. Let's examine the pros and cons of each approach.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/concepts/git_vs_http.md
https://huggingface.co/docs/huggingface_hub/en/concepts/git_vs_http/#git-vs-http-paradigm
#git-vs-http-paradigm
.md
39_1
At first, `huggingface_hub` was mostly built around the [`Repository`] class. It provides Python wrappers for common `git` commands such as `"git add"`, `"git commit"`, `"git push"`, `"git tag"`, `"git checkout"`, etc. The library also helps with setting credentials and tracking large files, which are often used in machine learning repositories. Additionally, the library allows you to execute its methods in the background, making it useful for uploading data during training. The main advantage of using a [`Repository`] is that it allows you to maintain a local copy of the entire repository on your machine. This can also be a disadvantage as it requires you to constantly update and maintain this local copy. This is similar to traditional software development where each developer maintains their own local copy and pushes changes when working on a feature. However, in the context of machine learning, this may not always be necessary as users may only need to download weights for inference or convert weights from one format to another without the need to clone the entire repository. <Tip warning={true}> [`Repository`] is now deprecated in favor of the http-based alternatives. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. </Tip>
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/concepts/git_vs_http.md
https://huggingface.co/docs/huggingface_hub/en/concepts/git_vs_http/#repository-the-historical-git-based-approach
#repository-the-historical-git-based-approach
.md
39_2
The [`HfApi`] class was developed to provide an alternative to local git repositories, which can be cumbersome to maintain, especially when dealing with large models or datasets. The [`HfApi`] class offers the same functionality as git-based approaches, such as downloading and pushing files and creating branches and tags, but without the need for a local folder that needs to be kept in sync. In addition to the functionalities already provided by `git`, the [`HfApi`] class offers additional features, such as the ability to manage repos, download files using caching for efficient reuse, search the Hub for repos and metadata, access community features such as discussions, PRs, and comments, and configure Spaces hardware and secrets.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/concepts/git_vs_http.md
https://huggingface.co/docs/huggingface_hub/en/concepts/git_vs_http/#hfapi-a-flexible-and-convenient-http-client
#hfapi-a-flexible-and-convenient-http-client
.md
39_3
Overall, the **HTTP-based approach is the recommended way to use** `huggingface_hub` in all cases. [`HfApi`] allows you to pull and push changes, work with PRs, tags and branches, interact with discussions and much more. Since the `0.16` release, the http-based methods can also run in the background, which was the last major advantage of the [`Repository`] class. However, not all git commands are available through [`HfApi`]. Some may never be implemented, but we are always trying to improve and close the gap. If you don't see your use case covered, please open [an issue on Github](https://github.com/huggingface/huggingface_hub)! We welcome feedback to help build the πŸ€— ecosystem with and for our users. This preference of the http-based [`HfApi`] over the git-based [`Repository`] does not mean that git versioning will disappear from the Hugging Face Hub anytime soon. It will always be possible to use `git` commands locally in workflows where it makes sense.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/concepts/git_vs_http.md
https://huggingface.co/docs/huggingface_hub/en/concepts/git_vs_http/#what-should-i-use--and-when-
#what-should-i-use--and-when-
.md
39_4