Hugging Face Space config:

---
title: Image Colorization
emoji: π’
colorFrom: purple
colorTo: yellow
sdk: docker
pinned: false
license: apache-2.0
app_port: 5000
---
Image Colorization
==============================
A deep learning based image colorization project.
FINDINGS
- The task we want to learn is `image-colorization`, but we can accomplish it through different kinds of tasks; I call these sub-tasks. In our context they could be:
  - regression based image colorization
  - classification (by binning) based colorization
  - GAN based colorization
  - image colorization + scene classification (the "Let there be Color!" research paper did this)
- While trying to come up with a project file structure, I found that the data, model, loss, metrics, and dataloader are all very coupled when dealing with the particular task (`image-colorization`) as a whole, but when we talk about a sub-task we have much more freedom.
- Within a sub-task (e.g., regression-unet-learner) we have already fixed a set of rules, so we can use different models without changing the data, or change datasets while using the same model. It is therefore important to fix the sub-task we want to do first.
- Making a folder for each sub-task seems right, since a sub-task has high cohesion and no coupling with any other sub-task.
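The swap-models-or-datasets-freely idea above is what `register_models.py` / `register_datasets.py` in the structure below are for. A minimal sketch of such a registry follows; the names `MODEL_REGISTRY`, `register_model`, and `create_model` are illustrative assumptions, not the project's actual API:

```python
# Minimal registry sketch: each sub-task keeps its own registry, so models
# and datasets can be swapped independently without touching each other.
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator that records a model class in the registry under `name`."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

@register_model("simple_model")
class SimpleModel:
    def __init__(self, config):
        self.config = config

def create_model(name, config):
    """Look up a registered model by name and instantiate it."""
    return MODEL_REGISTRY[name](config)

model = create_model("simple_model", {"lr": 1e-3})
```

With this pattern, an experiment config only needs to name a registered model and dataset; no import changes are required to try a new combination.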
RULES
- use lower_snake_case for functions
- use lower_snake_case for file & folder names
- use UpperCamelCase for class names
- sub-task name should be in lower-kebab-case
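Applied together, the rules above look like the following sketch (all names here are hypothetical, chosen only to illustrate the conventions):

```python
# file: src/sub_task_1/model/models/simple_model.py  <- lower_snake_case file name

SUB_TASK_NAME = "regression-unet-learner"  # sub-task names use lower-kebab-case

class SimpleUnetModel:  # class names use UpperCamelCase
    pass

def build_dataloader(batch_size=32):  # function names use lower_snake_case
    return {"batch_size": batch_size}
```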
Project File Structure
.
├── LICENSE
├── README.md              <- The top-level README for developers using this project.
├── data/
│   ├── external           <- Data from third party sources.
│   ├── interim            <- Intermediate data that has been transformed.
│   ├── processed          <- The final, canonical data sets for modeling.
│   └── raw                <- The original, immutable data dump.
├── models/                <- Trained models
├── notebooks/             <- Jupyter notebooks
├── configs/
│   ├── experiment1.yaml
│   ├── experiment2.yaml
│   ├── experiment3.yaml
│   └── ...
└── src/
    ├── sub_task_1/
    │   ├── validate_config.py
    │   ├── data/
    │   │   ├── register_datasets.py
    │   │   └── datasets/
    │   │       ├── dataset1.py
    │   │       └── dataset2.py
    │   ├── model/
    │   │   ├── base_model_interface.py
    │   │   ├── register_models.py
    │   │   ├── models/
    │   │   │   ├── simple_model.py
    │   │   │   └── complex_model.py
    │   │   ├── losses.py
    │   │   ├── metrics.py
    │   │   ├── callbacks.py
    │   │   └── dataloader.py
    │   └── scripts/
    │       ├── create_dataset.py
    │       └── create_model.py
    ├── sub_task_2/
    │   └── ...
    ├── sub_task_3/
    │   └── ...
    ├── scripts/
    │   ├── create_sub_task.py
    │   ├── prepare_dataset.py
    │   ├── visualize_dataset.py
    │   ├── visualize_results.py
    │   ├── train.py
    │   ├── evaluate.py
    │   └── inference.py
    └── utils/
        ├── data_utils.py
        └── model_utils.py
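Each sub-task's `validate_config.py` can check an experiment config from `configs/` before anything is built. A sketch of what that check could look like; the required keys here are assumptions for illustration, not the project's actual schema:

```python
REQUIRED_KEYS = {"sub_task", "dataset", "model"}  # assumed schema

def validate_config(config: dict) -> dict:
    """Raise ValueError if an experiment config is missing required keys."""
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    return config

# e.g. the parsed contents of a file like configs/experiment1.yaml
experiment = {
    "sub_task": "regression-unet-learner",
    "dataset": "dataset1",
    "model": "simple_model",
}
validate_config(experiment)  # returns the config unchanged when valid
```

Failing fast here means a typo in a YAML file surfaces before a dataset is downloaded or a model is constructed.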
Project based on the cookiecutter data science project template. #cookiecutterdatascience
Kaggle API docs:- https://github.com/Kaggle/kaggle-api/blob/main/docs/README.md
Kaggle Commands:-
- kaggle kernels pull anujpanthri/training-image-colorization-model -p kaggle/
- kaggle kernels push -p kaggle/
- echo "{\"username\":\"$KAGGLE_USERNAME\",\"key\":\"$KAGGLE_KEY\"}" > kaggle.json
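The `echo` command above is fragile because of shell quoting. A safer alternative (a sketch using Python's json module, which handles all escaping) writes kaggle.json programmatically:

```python
import json
import os

# Build the credentials dict from the environment; json.dump produces
# correctly quoted JSON, unlike a hand-written echo string.
creds = {
    "username": os.environ.get("KAGGLE_USERNAME", ""),
    "key": os.environ.get("KAGGLE_KEY", ""),
}
with open("kaggle.json", "w") as f:
    json.dump(creds, f)
```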
Docker Commands:-
- docker buildx build --secret id=COMET_API_KEY,env=COMET_API_KEY -t testcontainer .
- docker run -it -p 5000:5000 -e COMET_API_KEY=$COMET_API_KEY testcontainer
Git Commands:-
- git lfs migrate info --everything --include="*.zip,*.png,*.jpg"
- git lfs migrate import --everything --include="*.zip,*.png,*.jpg"
Version 1:
- I'm going to skip logging for now and use print statements instead.