Dataset Viewer
Auto-converted to Parquet
| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-20 06:26:59 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 429 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-20 06:26:36 |
| card | string | length 11 to 1.01M |
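The auto-converted Parquet behind this viewer can be read with the 🤗 `datasets` library. A minimal sketch follows, assuming a placeholder repo id (the excerpt does not name the dataset) and the column schema above:

```python
from datasets import load_dataset

# Assumption: "username/model-cards-dump" is a placeholder; substitute the
# dataset's actual repo id from the Hub page. Streaming avoids downloading
# every Parquet shard up front.
ds = load_dataset("username/model-cards-dump", split="train", streaming=True)

for row in ds.take(3):
    # Each record follows the schema above: modelId, author, last_modified,
    # downloads, likes, library_name, tags, pipeline_tag, createdAt, card.
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```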
harikc456/ppo-LunarLander-v2
harikc456
"2023-03-23T06:09:34"
0
0
null
[ "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
"2023-03-22T19:20:11"
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -30.51 +/- 111.18 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'hg_ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.0004 'num_envs': 64 'num_steps': 1024 'anneal_lr': True 'gae': True 'gamma': 0.98 'gae_lambda': 0.98 'num_minibatches': 4 'update_epochs': 500 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'harikc456/ppo-LunarLander-v2' 'batch_size': 65536 'minibatch_size': 16384} ```
isspek/xlnet-base-cased_monkeypox_llama_4_2e-5_16
isspek
"2025-03-23T14:56:33"
5
0
transformers
[ "transformers", "safetensors", "xlnet", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-26T14:46:19"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mrferr3t/c6979fb5-784a-4a2b-9567-cfb294aa1b39
mrferr3t
"2025-02-07T13:13:09"
6
0
peft
[ "peft", "safetensors", "opt", "generated_from_trainer", "base_model:facebook/opt-125m", "base_model:adapter:facebook/opt-125m", "license:other", "region:us" ]
null
"2025-02-07T13:09:35"
--- library_name: peft license: other base_model: facebook/opt-125m tags: - generated_from_trainer model-index: - name: miner_id_24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: false base_model: facebook/opt-125m bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 956ec78c9a13d665_train_data.json ds_type: json format: custom path: /workspace/input_data/956ec78c9a13d665_train_data.json type: field_input: text field_instruction: question field_output: attempt format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: eval_max_new_tokens: 128 eval_steps: eval_strategy: null flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0004 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 20 micro_batch_size: 16 mlflow_experiment_name: /tmp/956ec78c9a13d665_train_data.json model_type: AutoModelForCausalLM num_epochs: 100 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: saves_per_epoch: 0 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: disabled wandb_name: c41a388c-1b8f-4331-9834-02ca536326e8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c41a388c-1b8f-4331-9834-02ca536326e8 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # miner_id_24 This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 20 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
imnaresh/zu2502028
imnaresh
"2025-02-06T17:14:45"
12
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-06T16:03:39"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: zu2502028 --- # Zu2502028 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `zu2502028` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('imnaresh/zu2502028', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-slerp
autoprogrammer
"2024-11-21T19:18:11"
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-21T19:15:35"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
N0de/ppo-SnowballTarget
N0de
"2024-03-27T07:10:34"
17
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
"2024-03-27T07:10:26"
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: N0de/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
yuntian-deng/gpt2-explicit-cot-multiplication-20-digits
yuntian-deng
"2024-07-19T01:08:03"
148
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-19T00:47:14"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fizzarolli/lust-7b
Fizzarolli
"2024-04-16T14:02:36"
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "roleplay", "conversational", "trl", "unsloth", "en", "dataset:Fizzarolli/rpguild_processed", "dataset:Fizzarolli/bluemoon_processeed", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-15T01:42:29"
--- license: apache-2.0 datasets: - Fizzarolli/rpguild_processed - Fizzarolli/bluemoon_processeed language: - en library_name: transformers tags: - roleplay - conversational - trl - unsloth --- # lust-7b experimental rp model. ## prompt format this one's a bit funky. ``` <|description|>Character Character is blah blah blah</s> <|description|>Character 2 Character 2 is blah blah blah (optional to make more than one)</s> <|narrator|> Describe what you want to happen in the scenario (I dont even know if this works) <|message|>Character Character does blah blah blah</s> <|message|>Character 2 Character 2 does blah blah blah</s> <|message|>Character [start model generation here!] ``` sillytavern templates: TODO ## quants gguf: https://huggingface.co/mradermacher/lust-7b-GGUF (thanks @mradermacher!)
albertus-sussex/veriscrape-fixed-simcse-auto-reference_5_to_verify_5-fold-6
albertus-sussex
"2025-04-01T13:08:05"
0
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-04-01T13:07:32"
null
sail-rvc/linkara
sail-rvc
"2023-07-14T07:40:22"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:40:00"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # linkara ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:40:21 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
visdata/b14_3
visdata
"2025-01-06T12:39:07"
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-06T12:12:11"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RishikAngara/llama-finetuned
RishikAngara
"2025-03-12T18:36:17"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
"2025-03-12T18:36:13"
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
ShynBui/s5
ShynBui
"2023-08-04T18:03:50"
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2023-08-04T15:56:43"
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: s5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # s5 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
Azure99/blossom-v3-mistral-7b
Azure99
"2024-02-20T02:38:49"
1,677
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "zh", "en", "dataset:Azure99/blossom-chat-v1", "dataset:Azure99/blossom-math-v2", "dataset:Azure99/blossom-wizard-v1", "dataset:Azure99/blossom-orca-v1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-11-20T05:05:00"
--- license: apache-2.0 datasets: - Azure99/blossom-chat-v1 - Azure99/blossom-math-v2 - Azure99/blossom-wizard-v1 - Azure99/blossom-orca-v1 language: - zh - en --- # **BLOSSOM-v3-mistral-7b** [💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/) ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Mistral-7B-v0.1 pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used 100K Wizard, 100K Orca single-turn instruction datasets, training for 1 epoch; the second stage used a 2K Blossom math reasoning dataset, 50K Blossom chat multi-turn dialogue dataset, and 1% randomly sampled data from the first stage, training for 3 epochs. Note: The Mistral-7B-v0.1 pre-trained model is somewhat lacking in Chinese knowledge, so for Chinese scenarios, it is recommended to use [blossom-v3-baichuan2-7b](https://huggingface.co/Azure99/blossom-v3-baichuan2-7b). ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today? ``` Multi-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today?</s> |Human|: Generate a random number using python |Bot|: ``` Note: At the end of the Bot's output in the historical conversation, append a `</s>`.
aseratus1/226a44b2-f690-4bd5-8739-d700b4347af9
aseratus1
"2025-02-02T01:09:57"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/tinyllama-chat", "base_model:adapter:unsloth/tinyllama-chat", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-02T00:51:13"
--- library_name: peft license: apache-2.0 base_model: unsloth/tinyllama-chat tags: - axolotl - generated_from_trainer model-index: - name: 226a44b2-f690-4bd5-8739-d700b4347af9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/tinyllama-chat bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 474b74e7071ffe0a_train_data.json ds_type: json format: custom path: /workspace/input_data/474b74e7071ffe0a_train_data.json type: field_input: intent field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 2 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aseratus1/226a44b2-f690-4bd5-8739-d700b4347af9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/474b74e7071ffe0a_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8ee7e4f0-9971-4026-8d8b-539310b29b24 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8ee7e4f0-9971-4026-8d8b-539310b29b24 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 226a44b2-f690-4bd5-8739-d700b4347af9 This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2515 | 0.0175 | 200 | 1.0976 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
wclzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_silky_boar
wclzz
"2025-04-01T17:31:18"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am large silky boar", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-01T17:29:39"
null
JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector
JonatanGk
"2023-05-09T17:54:48"
12
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "catalan", "ca", "dataset:catalonia_independence", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:04"
--- license: apache-2.0 language: ca tags: - catalan datasets: - catalonia_independence metrics: - accuracy model-index: - name: roberta-base-ca-finetuned-mnli results: - task: name: Text Classification type: text-classification dataset: name: catalonia_independence type: catalonia_independence args: catalan metrics: - name: Accuracy type: accuracy value: 0.7611940298507462 - task: type: text-classification name: Text Classification dataset: name: catalonia_independence type: catalonia_independence config: catalan split: test metrics: - name: Accuracy type: accuracy value: 0.7208955223880597 verified: true - name: Precision Macro type: precision value: 0.7532458247651523 verified: true - name: Precision Micro type: precision value: 0.7208955223880597 verified: true - name: Precision Weighted type: precision value: 0.7367396361532118 verified: true - name: Recall Macro type: recall value: 0.6880645531209203 verified: true - name: Recall Micro type: recall value: 0.7208955223880597 verified: true - name: Recall Weighted type: recall value: 0.7208955223880597 verified: true - name: F1 Macro type: f1 value: 0.7013044744309381 verified: true - name: F1 Micro type: f1 value: 0.7208955223880597 verified: true - name: F1 Weighted type: f1 value: 0.713640086434487 verified: true - name: loss type: loss value: 0.6895929574966431 verified: true widget: - text: "Puigdemont, a l'estat espanyol: Quatre anys despr\xE9s, ens hem guanyat el\ \ dret a dir prou" - text: "Llarena demana la detenci\xF3 de Com\xEDn i Ponsat\xED aprofitant que s\xF3\ n a It\xE0lia amb Puigdemont" - text: "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina\ \ com el doble sentiment identitari. \xC9s a dir, se senten tant catalans com\ \ espanyols. 1 de cada cinc, en canvi, t\xE9 un sentiment excloent, nom\xE9s se\ \ senten catalans, i un 4% sol espanyol." --- # roberta-base-ca-finetuned-catalonia-independence-detector This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the catalonia_independence dataset. It achieves the following results on the evaluation set: - Loss: 0.6065 - Accuracy: 0.7612 <details> ## Training and evaluation data The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia. 
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 377 | 0.6311 | 0.7453 | | 0.7393 | 2.0 | 754 | 0.6065 | 0.7612 | | 0.5019 | 3.0 | 1131 | 0.6340 | 0.7547 | | 0.3837 | 4.0 | 1508 | 0.6777 | 0.7597 | | 0.3837 | 5.0 | 1885 | 0.7232 | 0.7582 | </details> ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline model_path = "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector" independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path) independence_analysis( "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol." ) # Output: [{'label': 'AGAINST', 'score': 0.7457581758499146}] independence_analysis( "Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont" ) # Output: [{'label': 'NEUTRAL', 'score': 0.7436802983283997}] independence_analysis( "Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou" ) # Output: [{'label': 'FAVOR', 'score': 0.9040119647979736}] ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(CATALAN).ipynb#scrollTo=j29NHJtOyAVU) ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3 ## Citation Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;) > Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C. > Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
SeyedHosseini360/w2v-bert-2.0-mongolian-colab-CV16.0
SeyedHosseini360
"2024-07-20T09:28:11"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-07-14T07:28:10"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kimjaewon/paligemma2-cord-finetuned
kimjaewon
"2025-03-26T01:33:45"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-26T01:23:09"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ikmalalfaozi/layoutlm-funsd-tf
ikmalalfaozi
"2024-05-24T04:21:12"
63
0
transformers
[ "transformers", "tf", "tensorboard", "layoutlm", "token-classification", "generated_from_keras_callback", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-05-24T04:09:16"
--- license: mit tags: - generated_from_keras_callback base_model: microsoft/layoutlm-base-uncased model-index: - name: layoutlm-funsd-tf results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd-tf This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2511 - Validation Loss: 0.6882 - Train Overall Precision: 0.7189 - Train Overall Recall: 0.7878 - Train Overall F1: 0.7517 - Train Overall Accuracy: 0.8039 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch | |:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:| | 1.6546 | 1.3264 | 0.3384 | 0.3708 | 0.3538 | 0.5774 | 0 | | 1.0901 | 0.8303 | 0.6013 | 0.6508 | 0.6251 | 0.7392 | 1 | | 0.7169 | 0.6666 | 0.6778 | 0.7441 | 0.7094 | 0.7864 | 2 | | 0.5285 | 0.6429 | 0.6859 | 0.7702 | 0.7256 | 0.8022 | 3 | | 0.4270 | 0.6216 | 0.7089 | 0.7832 | 0.7442 | 0.8092 | 4 | | 0.3451 | 0.6699 | 0.7038 | 0.7832 | 0.7414 | 0.7972 | 5 | | 0.2867 | 0.6886 | 0.7203 | 0.7868 | 0.7520 | 0.7965 | 6 | | 0.2511 | 0.6882 | 0.7189 | 0.7878 | 0.7517 | 0.8039 | 7 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
calcuis/phi4
calcuis
"2025-01-26T23:37:13"
8,170
0
null
[ "gguf", "phi4", "gguf-connector", "text-generation", "en", "arxiv:2412.08905", "base_model:microsoft/phi-4-gguf", "base_model:quantized:microsoft/phi-4-gguf", "doi:10.57967/hf/4273", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-01-22T01:51:34"
--- license: mit language: - en base_model: - microsoft/phi-4-gguf pipeline_tag: text-generation tags: - phi4 - gguf-connector --- # GGUF quantized and bug fixed version of **phi4** ### review - bug fixed for: "ResponseError: llama runner process has terminated: GGML_ASSERT(hparams.n_swa > 0) failed" - define the architecture (from none) to llama; all works right away ### run the model use any gguf connector to interact with gguf file(s), i.e., [connector](https://pypi.org/project/gguf-connector/) ### reference - base model: microsoft/[phi-4](https://huggingface.co/microsoft/phi-4) - bug fixed following the guide written by [unsloth](https://unsloth.ai/blog/phi4) - tool used for quantization: [cutter](https://pypi.org/project/gguf-cutter) ### citation [Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905) ### appendices: model summary and quality (written by microsoft) #### model summary | | | |-------------------------|-------------------------------------------------------------------------------| | **Developers** | Microsoft Research | | **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures | | **Architecture** | 14B parameters, dense decoder-only Transformer model | | **Inputs** | Text, best suited for prompts in the chat format | | **Context length** | 16K tokens | | **GPUs** | 1920 H100-80G | | **Training time** | 21 days | | **Training data** | 9.8T tokens | | **Outputs** | Generated text in response to input | | **Dates** | October 2024 – November 2024 | | **Status** | Static model trained on an offline dataset with cutoff dates of June 2024 and earlier for publicly available data | | **Release date** | December 12, 2024 | | **License** | MIT | #### model quality to understand the capabilities, we (here refer to microsoft side) compare `phi-4` with a set of models over OpenAI’s SimpleEval benchmark; at the high-level overview of the model quality on representative benchmarks; for the table below, higher numbers indicate better performance: | **Category** | **Benchmark** | **phi-4** (14B) | **phi-3** (14B) | **Qwen 2.5** (14B instruct) | **GPT-4o-mini** | **Llama-3.3** (70B instruct) | **Qwen 2.5** (72B instruct) | **GPT-4o** | |------------------------------|---------------|-----------|-----------------|----------------------|----------------------|--------------------|-------------------|-----------------| | Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | **88.1** | | Science | GPQA | **56.1** | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 | | Math | MGSM<br>MATH | 80.6<br>**80.4** | 53.5<br>44.6 | 79.6<br>75.6 | 86.5<br>73.0 | 89.1<br>66.3* | 87.3<br>80.0 | **90.4**<br>74.6 | | Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | **90.6** | | Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | **39.4** | | Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | **90.2** | 76.7 | 80.9 | \* these scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular 
trouble following.
souging/2583c680-ffb3-4037-9d5c-821d6bfdcad2
souging
"2025-03-26T12:19:15"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:heegyu/WizardVicuna-open-llama-3b-v2", "base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2", "license:apache-2.0", "region:us" ]
null
"2025-03-26T05:15:39"
--- library_name: peft license: apache-2.0 base_model: heegyu/WizardVicuna-open-llama-3b-v2 tags: - axolotl - generated_from_trainer model-index: - name: 2583c680-ffb3-4037-9d5c-821d6bfdcad2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: heegyu/WizardVicuna-open-llama-3b-v2 bf16: auto dataset_prepared_path: null datasets: - data_files: - a524fe0959fac29c_train_data.json ds_type: json format: custom path: /root/G.O.D-test/core/data/a524fe0959fac29c_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null eval_max_new_tokens: 128 eval_steps: 0 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: souging/2583c680-ffb3-4037-9d5c-821d6bfdcad2 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000202 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 5 mlflow_experiment_name: /tmp/a524fe0959fac29c_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: false resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 0 saves_per_epoch: null sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true warmup_steps: 100 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 2583c680-ffb3-4037-9d5c-821d6bfdcad2 This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000202 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 160 - total_eval_batch_size: 40 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.3
rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab
rafiulrumy
"2021-12-08T07:47:56"
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-hindi-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-hindi-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
spacemanidol/flan-t5-base-5-6-xsum
spacemanidol
"2023-03-10T22:50:14"
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-02-28T18:21:32"
--- tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: base-5-6 results: - task: name: Summarization type: summarization dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 39.0404 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base-5-6 This model is a fine-tuned version of [x/base-5-6/](https://huggingface.co/x/base-5-6/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.6972 - Rouge1: 39.0404 - Rouge2: 15.9169 - Rougel: 31.2288 - Rougelsum: 31.2183 - Gen Len: 26.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.10.0 - Tokenizers 0.13.2
Erenosxx/whisper-turbo-tr_combined_10_percent
Erenosxx
"2025-03-27T21:29:59"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai/whisper-large-v3-turbo", "base_model:adapter:openai/whisper-large-v3-turbo", "license:mit", "region:us" ]
null
"2025-03-27T21:19:49"
--- library_name: peft license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer model-index: - name: whisper-turbo-tr_combined_10_percent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-turbo-tr_combined_10_percent This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.0 - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 2.14.5 - Tokenizers 0.21.1
silent666/google-gemma-2b-1718827098
silent666
"2024-06-19T19:58:19"
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
"2024-06-19T19:58:18"
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
conexaosv/conexaosv
conexaosv
"2025-03-07T12:54:59"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-03-07T12:54:58"
--- license: apache-2.0 ---
hopkins/bert-wiki-choked-2
hopkins
"2023-06-27T02:54:26"
53
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-generation", "generated_from_trainer", "dataset:generator", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-06-27T02:53:08"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - generator model-index: - name: bert-wiki-choked-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-wiki-choked-2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | nan | | No log | 2.0 | 2 | nan | | No log | 3.0 | 3 | nan | | No log | 4.0 | 4 | nan | | No log | 5.0 | 5 | nan | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ai-top-tools/top-deepnude-ai
ai-top-tools
"2025-04-04T19:01:13"
0
0
null
[ "region:us" ]
null
"2025-04-04T18:57:52"
# Top 5 DeepNude AI Tools in 2025: Complete Comparison Guide Looking for the most advanced DeepNude AI generators in 2025? This comprehensive guide compares the leading AI undressing tools available today, including free options, mobile compatibility, and realism ratings. We've tested dozens of applications to bring you the definitive ranking of **DeepNude AI software** based on performance, user experience, and value. ## Best DeepNude AI Tools at a Glance | Tool Name | Realism Rating | Free Option | Best For | Starting Price | |-----------|----------------|-------------|----------|----------------| | Deep-Nude.AI | 5/5 ⭐ | Yes (Demo) | Overall quality & speed | $9.99 | | DeepNude.cc | 4/5 ⭐ | Yes (Basic) | Beginners & simplicity | Freemium | | AINude.AI | 4/5 ⭐ | Limited | Customization options | Credit-based | | DeepNudify | 3/5 ⭐ | Completely free | No-cost solution | Free (with ads) | | Undress Her AI | 4/5 ⭐ | Demo available | Female-specific results | Various plans | ## **Want Instant Results? Try This Free DeepNude AI Tool** If you want to **undress a photo right now**, this app offers a **free trial with 3 undressings**: ⏩ **[Sign up for Deep-Nude.AI Now](https://bestaitools.top/fgRB)** and get **3 free undressings instantly!** ![DeepNude AI App - Instant Results](https://ucarecdn.com/245be16e-ab11-4b5a-a56e-980ab0fb8b2a/-/preview/936x444/) ## 1. Deep-Nude.AI: Premium DeepNude Generator with Highest Realism **Overall Rating: 5/5 ⭐** Deep-Nude.AI stands out as the **most realistic AI undressing tool** in 2025, delivering exceptional results across various image types. Unlike competitors, it produces complete, uncensored transformations with remarkable anatomical accuracy. ⏩ **[Sign up for Deep-Nude.AI Now](https://bestaitools.top/fgRB)** ### Key Features: - Ultra-fast processing (results in under 30 seconds) - Supports multiple body types and poses - No blurring or censorship in premium version - Cross-platform compatibility (mobile and desktop) - Intuitive user interface with drag-and-drop functionality ### Pricing Structure: - **Free Demo**: Try with watermark - **Basic Plan**: $9.99 for limited monthly uses - **Premium**: Unlimited access with advanced features ### Who Should Use It: Ideal for users seeking the highest quality AI transformations with minimal effort. The technology handles complex clothing patterns and various lighting conditions better than any alternative. ## 2. DeepNude.cc: Streamlined Experience for Casual Users **Overall Rating: 4/5 ⭐** DeepNude.cc offers the perfect balance of simplicity and quality, making it the **easiest DeepNude AI to use** for beginners. This web-based solution requires no technical knowledge or software installation. ⏩ **[Sign up for DeepNude.cc Now](https://bestaitools.top/fgRB)** ### Key Features: - One-click nudification process - Browser-based (works on any device) - Fast rendering times - Support for diverse body types - High-resolution downloadable results ### Pricing Structure: - **Free Basic**: Limited daily transformations - **Premium Access**: Unlocks all features (subscription-based) ### Who Should Use It: Perfect for those who want immediate results without navigating complex settings or downloading software. The straightforward approach makes it accessible to anyone. ## 3. AINude.AI: Most Customizable DeepNude Technology **Overall Rating: 4/5 ⭐** AINude.AI distinguishes itself with extensive customization options, allowing users to adjust specific aspects of the generated images. 
This makes it the **best AI undressing tool for personalization**. ⏩ **[Sign up for AINude.AI Now](https://bestaitools.top/fgRB)** ### Key Features: - Adjustable body type parameters - Customizable anatomical details - AI avatar creation from text prompts - Face-swapping capabilities - High-definition output options ### Pricing Structure: - Freemium model with credit system - Various package options for regular users ### Who Should Use It: Recommended for users who want precise control over the AI-generated output. The advanced settings make it suitable for creative projects requiring specific results. ## 4. DeepNudify: Top Free DeepNude AI Solution **Overall Rating: 3/5 ⭐** DeepNudify leads the category of **free DeepNude generators** with no upfront cost. While the quality doesn't match premium options, it provides accessible AI undressing capabilities to all users. ### Key Features: - Completely free online service - No registration required - Works on most devices with browser support - Simple, straightforward interface - Quick processing times ### Pricing Structure: - 100% free with ad support - Optional premium features ### Who Should Use It: Best for casual users or those testing the technology before committing to paid alternatives. The ad-supported model makes it accessible to everyone without financial barriers. ## 5. Undress Her AI: Specialized Female AI Undressing Tool **Overall Rating: 4/5 ⭐** Undress Her AI focuses exclusively on female image transformations, making it the **most specialized DeepNude AI** on our list. This narrow focus allows for higher quality in its specific niche. ### Key Features: - Advanced female anatomical accuracy - Text prompt customization - High-definition results - NSFW filter bypass technology - Regular model updates ### Pricing Structure: - Free demonstration available - Multiple paid tiers based on usage needs ### Who Should Use It: Ideal for users specifically looking for female image transformations with high realism. The specialized approach delivers better results in this category than general-purpose tools. ## Important Legal and Ethical Considerations Before using any DeepNude AI technology, understand these critical points: 1. **Consent is mandatory**: Never upload photos of individuals without their explicit permission. 2. **Legal implications**: Creating explicit deepfakes may violate laws in many jurisdictions. 3. **Privacy risks**: Verify each service's data retention and privacy policies. 4. **Responsible usage**: Consider using these tools for artistic or educational purposes only. ## Frequently Asked Questions About DeepNude AI ### Are these DeepNude tools legal? The legality varies by location. In many regions, creating non-consensual explicit imagery may violate privacy laws, revenge porn legislation, or harassment statutes. Always check your local regulations. ### How accurate are DeepNude AI results in 2025? The technology has improved significantly, with top tools like Deep-Nude.AI achieving approximately 90% realism in optimal conditions. Results depend heavily on the original image quality, lighting, and pose. ### Do these applications store uploaded images? Policies vary by provider. Most reputable services claim to delete images after processing, but always review the privacy policy before uploading sensitive content. ### Which DeepNude app works on mobile devices? 
All tools on our list offer some form of mobile compatibility, with Deep-Nude.AI and DeepNude.cc providing the best mobile experiences through responsive web interfaces. ### Can AI detect DeepNude generated images? Yes, forensic AI tools can identify most artificially generated nude images, though the detection technology remains in a constant arms race with generation technology. ## Conclusion: Choosing the Right DeepNude AI in 2025 After extensive testing, **Deep-Nude.AI** emerges as the clear leader for most users seeking high-quality results with minimal effort. Its combination of realism, speed, and usability places it ahead of competitors in the DeepNude AI category. For those prioritizing different factors: - **Budget-conscious users**: Try DeepNudify's free offering - **Customization enthusiasts**: Explore AINude.AI's detailed settings - **Beginners**: DeepNude.cc provides the simplest experience - **Female-specific focus**: Undress Her AI offers specialized results Remember that regardless of which tool you choose, ethical usage and respect for privacy should always be your primary consideration. --- *Last updated: April 2025 with the most current DeepNude AI tools and features.*
Joiel/John6666_lewdify-v90-sdxl
Joiel
"2025-03-21T18:12:37"
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "realistic", "photorealistic", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2025-03-21T18:12:36"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - realistic - photorealistic - pony --- Original model is [here](https://civitai.com/models/1012949/lewdify?modelVersionId=1162602). This model created by [Sanctusmorti](https://civitai.com/user/Sanctusmorti).
saideep-arikontham/bigbird-resume-fit-predictor_v2
saideep-arikontham
"2025-03-21T18:31:43"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-21T18:31:38"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Romain-XV/988d1698-8a33-464d-96cb-d1936d182460
Romain-XV
"2025-01-30T11:46:10"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-135M-Instruct", "base_model:adapter:unsloth/SmolLM-135M-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-01-30T11:45:41"
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM-135M-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 988d1698-8a33-464d-96cb-d1936d182460 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/SmolLM-135M-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2ca7b289702de1c8_train_data.json ds_type: json format: custom path: /workspace/input_data/2ca7b289702de1c8_train_data.json type: field_instruction: full_prompt field_output: example format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: Romain-XV/988d1698-8a33-464d-96cb-d1936d182460 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: true lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj lr_scheduler: cosine max_steps: 829 micro_batch_size: 4 mlflow_experiment_name: /tmp/2ca7b289702de1c8_train_data.json model_type: AutoModelForCausalLM optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 48f7ffc8-f75e-432e-b390-464026ee686b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 48f7ffc8-f75e-432e-b390-464026ee686b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 988d1698-8a33-464d-96cb-d1936d182460 This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.1905 | 1 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
array/sat-dynamic-13b
array
"2025-02-28T17:00:16"
23
0
transformers
[ "transformers", "safetensors", "dataset:array/SAT", "arxiv:2412.07755", "license:mit", "endpoints_compatible", "region:us" ]
null
"2025-02-27T23:09:22"
--- library_name: transformers license: mit datasets: - array/SAT --- # Model Card for Model ID Please check https://github.com/arijitray1993/SAT on how to run inference with this model. If you use the model, please cite: ``` @misc{ray2024satspatialaptitudetraining, title={SAT: Spatial Aptitude Training for Multimodal Language Models}, author={Arijit Ray and Jiafei Duan and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko}, year={2024}, eprint={2412.07755}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2412.07755}, } ```
gvo1112/task-1-Qwen-Qwen2.5-7B-Instruct-1736202741
gvo1112
"2025-01-06T22:32:22"
23
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "region:us" ]
null
"2025-01-06T22:32:21"
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
End of preview.

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.

This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, but its format may not suit every use case. If there are other features that you would like to see included in this dataset, please open a new discussion.
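
For example, the dataset can be loaded directly with the 🤗 datasets library. The sketch below is minimal and hedged: the repository ID is the one this page describes, while the split name ("train") and the card column holding the raw README text are assumptions based on the preview above.

```python
from datasets import load_dataset

# Load the model cards dataset; a single split named "train" is assumed
dataset = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

print(dataset)  # number of rows and column names

# "card" is assumed to be the column holding each model's raw README.md text
print(dataset[0]["card"][:500])
```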

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata (see the sketch after this list)
  • training language models on model cards
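
As a small illustration of the metadata-analysis use case, the sketch below counts the most common values of one metadata column. The column name library_name is an assumption based on the preview; any of the other metadata columns could be substituted.

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Count the most common values of an assumed metadata column ("library_name")
counts = Counter(dataset["library_name"])
for library, n in counts.most_common(10):
    print(f"{library}: {n}")
```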

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.
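
The available splits and the column names and types can be inspected after loading; a minimal sketch (the split name "train" is an assumption):

```python
from datasets import load_dataset

dataset = load_dataset("librarian-bots/model_cards_with_metadata")

# The DatasetDict lists the available splits and their sizes
print(dataset)

# Features describe the column names and types of the single split ("train" assumed)
print(dataset["train"].features)
```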

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards. In particular, it was created to support research on model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, and that option may be preferable if you have a very specific use case or require a different format.
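
A minimal sketch of that alternative using the huggingface_hub client library; the repository ID used here is only an example:

```python
from huggingface_hub import ModelCard

# Download and parse a single model card straight from the Hub
card = ModelCard.load("bert-base-uncased")

print(card.data)        # the YAML metadata block (license, tags, ...)
print(card.text[:500])  # the README body without the metadata block
```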

Source Data

The source data consists of the README.md files of models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be stored alongside the model card in a model's repository.

Data Collection and Processing

The data is downloaded daily using a cron job.
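
The exact collection pipeline is not published here; the sketch below only illustrates how README.md files could be fetched with the huggingface_hub client (the limit and error handling are illustrative assumptions, not the actual pipeline):

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()

# Iterate over publicly listed models and fetch each README.md (illustrative only)
for model in api.list_models(limit=5):
    try:
        path = hf_hub_download(repo_id=model.id, filename="README.md", repo_type="model")
        print(model.id, path)
    except Exception:
        # Some repositories have no README.md
        pass
```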

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information in this repository about who created each model card, although this information can be retrieved from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, some model cards may contain such information. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community, and we do not have any control over their content. We do not review the content of model cards and make no claims about the accuracy of the information they contain. Some model cards discuss bias themselves, sometimes by providing examples of bias in either the training data or the responses produced by the model. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
