Each record in this dataset has the following columns: `modelId` (string, 5-134 chars), `author` (string, 2-42 chars), `last_modified` (timestamp), `downloads` (int64, 0-223M), `likes` (int64, 0-8.08k), `library_name` (351 classes), `tags` (sequence, 1-4.05k entries), `pipeline_tag` (53 classes), `createdAt` (timestamp), and `card` (string, 11-1.01M chars).
Nekuromento/watt-tool-8B-Q4_K_M-GGUF | Nekuromento | "2025-01-14T18:26:57Z" | 70 | 1 | null | [
"gguf",
"function-calling",
"tool-use",
"llama",
"bfcl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:watt-ai/watt-tool-8B",
"base_model:quantized:watt-ai/watt-tool-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-14T18:26:35Z" | ---
license: apache-2.0
language:
- en
base_model: watt-ai/watt-tool-8B
tags:
- function-calling
- tool-use
- llama
- bfcl
- llama-cpp
- gguf-my-repo
---
# Nekuromento/watt-tool-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`watt-ai/watt-tool-8B`](https://huggingface.co/watt-ai/watt-tool-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/watt-ai/watt-tool-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nekuromento/watt-tool-8B-Q4_K_M-GGUF --hf-file watt-tool-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nekuromento/watt-tool-8B-Q4_K_M-GGUF --hf-file watt-tool-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nekuromento/watt-tool-8B-Q4_K_M-GGUF --hf-file watt-tool-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nekuromento/watt-tool-8B-Q4_K_M-GGUF --hf-file watt-tool-8b-q4_k_m.gguf -c 2048
```
|
johnsutor/Llama-3-8B-Instruct_ties-density-0.1 | johnsutor | "2024-06-07T17:23:23Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:merge:DeepMount00/Llama-3-8b-Ita",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:merge:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"base_model:merge:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:merge:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:merge:nbeerbower/llama-3-gutenberg-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T17:16:41Z" | ---
base_model:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- DeepMount00/Llama-3-8b-Ita
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
license: apache-2.0
tags:
- mergekit
- merge
---
# Model Merge Parameters
- **Base model:** meta-llama/Meta-Llama-3-8B-Instruct
- **Models:** failspy/Meta-Llama-3-8B-Instruct-abliterated-v3, VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct, DeepMount00/Llama-3-8b-Ita, nbeerbower/llama-3-gutenberg-8B, jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0, meta-llama/Meta-Llama-3-8B-Instruct
- **Merge method:** ties
- **Random seed:** 42
- **density:** 0.1
- **normalize:** true
- **weight:** 1.0
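For reference, a hypothetical mergekit configuration matching these parameters might look like the sketch below; the actual config file is not included in this repository, so the exact layout is an assumption.
```python
# Hypothetical reconstruction of the mergekit config implied by the parameters above;
# the file actually used for this merge is not published, so treat this as a sketch.
config_yaml = """
base_model: meta-llama/Meta-Llama-3-8B-Instruct
merge_method: ties
parameters:
  normalize: true
models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters: {density: 0.1, weight: 1.0}
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters: {density: 0.1, weight: 1.0}
  - model: DeepMount00/Llama-3-8b-Ita
    parameters: {density: 0.1, weight: 1.0}
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters: {density: 0.1, weight: 1.0}
  - model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
    parameters: {density: 0.1, weight: 1.0}
"""

with open("ties_density_0.1.yml", "w") as f:
    f.write(config_yaml)

# mergekit's CLI would then build the merged model, e.g.:
#   mergekit-yaml ties_density_0.1.yml ./merged
```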
|
casehold/bert-double | casehold | "2021-07-02T05:54:19Z" | 18 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"pretraining",
"fill-mask",
"en",
"arxiv:2104.08671",
"arxiv:1810.04805",
"arxiv:1903.10676",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: en
pipeline_tag: fill-mask
---
### BERT (double)
Model and tokenizer files for BERT (double) model from [When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset](https://arxiv.org/abs/2104.08671).
### Training Data
BERT (double) is pretrained using the same English Wikipedia corpus that the base BERT model (uncased, 110M parameters), [bert-base-uncased](https://huggingface.co/bert-base-uncased), was pretrained on. For more information on the pretraining corpus, refer to the [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) paper.
### Training Objective
This model is initialized with the base BERT model (uncased, 110M parameters), [bert-base-uncased](https://huggingface.co/bert-base-uncased), and trained for an additional 1M steps on the MLM and NSP objectives.
This facilitates a direct comparison to our BERT-based models for the legal domain, which are also pretrained for 2M total steps.
- Legal-BERT: zlucia/legalbert (https://huggingface.co/zlucia/legalbert)
- Custom Legal-BERT: zlucia/custom-legalbert (https://huggingface.co/zlucia/custom-legalbert)
### Usage
Please see the [casehold repository](https://github.com/reglab/casehold) for scripts that support computing pretrain loss and finetuning on BERT (double) for classification and multiple choice tasks described in the paper: Overruling, Terms of Service, CaseHOLD.
See `demo.ipynb` in the casehold repository for details on calculating domain specificity (DS) scores for tasks or task examples by taking the difference in pretrain loss on BERT (double) and Legal-BERT. DS score may be readily extended to estimate domain specificity of tasks in other domains using BERT (double) and existing pretrained models (e.g., [SciBERT](https://arxiv.org/abs/1903.10676)).
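As a quick sanity check that the checkpoint loads as a standard masked-language model, something like the following should work (the example sentence is arbitrary; the actual pretrain-loss and DS-score computations live in the casehold repository's scripts):
```python
from transformers import pipeline

# Minimal sketch: load BERT (double) with the fill-mask pipeline. The legal-sounding
# sentence below is an arbitrary illustration, not taken from the CaseHOLD data.
fill_mask = pipeline("fill-mask", model="casehold/bert-double")
print(fill_mask("The court [MASK] the defendant's motion to dismiss."))
```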
### Citation
```
@inproceedings{zhengguha2021,
  title={When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset},
  author={Lucia Zheng and Neel Guha and Brandon R. Anderson and Peter Henderson and Daniel E. Ho},
  year={2021},
  eprint={2104.08671},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  booktitle={Proceedings of the 18th International Conference on Artificial Intelligence and Law},
  publisher={Association for Computing Machinery}
}
```
Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset. In *Proceedings of the 18th International Conference on Artificial Intelligence and Law (ICAIL '21)*, June 21-25, 2021, São Paulo, Brazil. ACM Inc., New York, NY, (in press). arXiv: [2104.08671 [cs.CL]](https://arxiv.org/abs/2104.08671). |
cyf1215/deepseek_sql_model | cyf1215 | "2025-02-17T02:31:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-17T02:30:31Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cyf1215
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lwasinam/voicera | Lwasinam | "2024-07-30T15:00:02Z" | 14 | 19 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"speech",
"text-to-speech",
"dataset:openslr/librispeech_asr",
"dataset:MushanW/GLOBE",
"dataset:MikhailT/hifi-tts",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2024-07-26T10:08:12Z" | ---
library_name: transformers
tags:
- speech
- text-to-speech
datasets:
- openslr/librispeech_asr
- MushanW/GLOBE
- MikhailT/hifi-tts
---
# Model Card for Model ID
Voicera is an autoregressive (AR) text-to-speech model trained on ~1000 hours of speech data.
Speech is converted to discrete tokens using the Multi-Scale Neural Audio Codec (SNAC) model.
**NB: This is not a SOTA model and is not accurate enough for production use cases.**
## Model Details
### Model Description
"Voicera" is a text-to-speech (TTS) model designed for generating speech from written text.
It uses a GPT-2 type architecture, which helps in creating natural and expressive speech.
The model converts audio into tokens using the "Multi-Scale Neural Audio Codec (SNAC)" model, allowing it to understand and produce speech sounds.
Voicera aims to provide clear and understandable speech, focusing on natural pronunciation and intonation.
It's a project to explore TTS technology and improve audio output quality.
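As a rough illustration of the tokenization step, SNAC can be used roughly as follows; this assumes the `snac` package's published interface and the 24 kHz checkpoint, and Voicera's exact codec configuration may differ.
```python
import torch
from snac import SNAC  # pip install snac

# Sketch of audio -> discrete tokens -> audio with SNAC; the checkpoint name and
# sample rate here are assumptions, not Voicera's confirmed settings.
codec = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval()

audio = torch.randn(1, 1, 24000)  # (batch, channels, samples): 1 s of placeholder audio
with torch.inference_mode():
    codes = codec.encode(audio)   # list of code tensors at multiple temporal scales
    recon = codec.decode(codes)   # reconstruct a waveform from the codes
print([c.shape for c in codes])
```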
- **Developed by:** Lwasinam Dilli
- **Funded by :** Lwasinam Dilli
- **Model type:** GPT2-Transformer architecture
- **License:** Free and Open to use I guess :)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Github](https://github.com/Lwasinam/voicera)
- **Paper [optional]:** [More Information Needed]
- **Demo :** [Demos](https://lwasinam.github.io/)
## How to Get Started with the Model
There are three models: the base model and two finetunes on the Jenny and Expresso datasets.
The best of the three is currently the Jenny finetune.
Here are Colab links to all three, respectively:
1. [Base Model](https://colab.research.google.com/drive/10nPKliRs1C3ofv2J16_HGDlmzfd-yBtj#scrollTo=r17orAuZ45Q2)
2. [Jenny-Finetune](https://colab.research.google.com/drive/1MSzGGqIhGYVCn76alsX9oBzwC4EtOQSR#scrollTo=Oz0DG-MtovBw)
3. [Expresso-Finetune](https://colab.research.google.com/drive/1wzwSOtpT1CpEMvbcjvvgEKQZoQa5bX2p#scrollTo=YrBUwCNYmmUW&uniqifier=1)
## Training Details
### Training Data
The training data consists of the clean subsets of the HiFi-TTS, LibriSpeech, LibriTTS and GLOBE datasets.
### Training Procedure
During training, audio tokens are generated with the SNAC model and concatenated with the text tokens; everything is trained in an autoregressive manner,
but since we're only interested in the audio tokens, the text-token loss is weighted by 0.1.
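A minimal sketch of what such a weighted autoregressive objective could look like in PyTorch is shown below; the shapes, the text/audio boundary, and the exact use of the 0.1 factor are illustrative assumptions, not the model's confirmed implementation.
```python
import torch
import torch.nn.functional as F

# Toy illustration: one cross-entropy over the concatenated text+audio sequence,
# with text positions down-weighted by 0.1 so audio tokens dominate the loss.
vocab_size, seq_len = 4096, 16
logits = torch.randn(1, seq_len, vocab_size)            # decoder outputs
targets = torch.randint(0, vocab_size, (1, seq_len))    # next-token targets
is_text = torch.zeros(1, seq_len, dtype=torch.bool)
is_text[:, :6] = True                                   # assume first 6 positions are text

per_token = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
weights = torch.where(is_text, torch.tensor(0.1), torch.tensor(1.0))
loss = (per_token * weights).sum() / weights.sum()
```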
#### Preprocessing
Hugging Face had pretty much all the datasets I needed. I just had to filter out audio longer than 10 seconds due to compute constraints.
#### Training Hyperparameters
- Weight decay: 0.1
- Batch size: 1, with gradient accumulation of 32
- Scheduler: CosineAnnealingWarmRestarts with a minimum learning rate of 1e-7 and a warm-restart period of 500 steps
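For reference, the stated schedule maps onto PyTorch's `CosineAnnealingWarmRestarts` roughly as in the sketch below; the model, optimizer, and base learning rate are placeholders, and only the scheduler settings come from the card above.
```python
import torch

# Placeholder model/optimizer; only the scheduler settings mirror the hyperparameters above.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=500, eta_min=1e-7  # restart every 500 steps, floor the LR at 1e-7
)

for step in range(1000):
    # forward/backward (with gradient accumulation of 32) would go here
    optimizer.step()
    scheduler.step()
```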
## Evaluation
I should probably work on this, the loss went down and the output got better :)
### Results
Check out the demo page here -> [Demo](https://lwasinam.github.io/)
#### Summary
- **Hardware Type:** Tesla P100
- **Hours used:** 300+hrs
- **Cloud Provider:** Kaggle :)
## Citation [optional]
**BibTeX:**
```
@software{Betker_TorToiSe_text-to-speech_2022,
author = {Betker, James},
month = apr,
title = {{TorToiSe text-to-speech}},
url = {https://github.com/neonbjb/tortoise-tts},
version = {2.0},
year = {2022}
}
@software{Siuzdak_SNAC_Multi-Scale_Neural_2024,
author = {Siuzdak, Hubert},
month = feb,
title = {{SNAC: Multi-Scale Neural Audio Codec}},
url = {https://github.com/hubertsiuzdak/snac},
year = {2024}
}
```
## Model Card Authors [optional]
Lwasinam Dilli |
TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ | TheBloke | "2023-12-10T09:09:22Z" | 22 | 19 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"en",
"dataset:migtissera/Synthia-v1.3",
"dataset:meta-math/MetaMathQA",
"dataset:NousResearch/capybara",
"base_model:DiscoResearch/DiscoLM-mixtral-8x7b-v2",
"base_model:quantized:DiscoResearch/DiscoLM-mixtral-8x7b-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-12-09T19:04:00Z" | ---
tags:
- mixtral
- moe
- discoresearch
base_model: DiscoResearch/DiscoLM-mixtral-8x7b-v2
datasets:
- migtissera/Synthia-v1.3
- meta-math/MetaMathQA
- NousResearch/capybara
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Disco Research
model_name: Discolm Mixtral 8X7B v2
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Discolm Mixtral 8X7B v2 - GPTQ
- Model creator: [Disco Research](https://huggingface.co/DiscoResearch)
- Original model: [Discolm Mixtral 8X7B v2](https://huggingface.co/DiscoResearch/DiscoLM-mixtral-8x7b-v2)
# WARNING - I CAN'T GET THESE GPTQ QUANTS TO WORK
Unfortunately, after 10 hours quanting at not insignificant cost, they don't actually appear to work.
I will leave them up in case any solution presents itself soon. But for now, I get errors like this
```
File "/workspace/venv/pytorch2/lib/python3.10/site-packages/auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py", line 239, in forward
zeros = zeros.reshape(-1, 1, zeros.shape[1] * zeros.shape[2])
RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 1, 0] because the unspecified dimension size -1 can be any value and is ambiguous
File "/workspace/venv/pytorch2/lib/python3.10/site-packages/auto_gptq/nn_modules/qlinear/qlinear_cuda.py", line 245, in forward
zeros = zeros.reshape(self.scales.shape)
RuntimeError: shape '[32, 8]' is invalid for input of size 0
```
<!-- description start -->
# Description
This repo contains GPTQ model files for [Disco Research's Discolm Mixtral 8X7B v2](https://huggingface.co/DiscoResearch/DiscoLM-mixtral-8x7b-v2).
**Experimental model**
This is an experimental GPTQ of Mistral AI's Mixtral 7B 8Expert.
It is a quantisation of an unofficial implementation of Mixtral 7B 8Expert, created and hosted by DiscoResearch at: [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert).
To use it requires:
* Latest Transformers, installed from Github:
```
pip3 install git+https://github.com/huggingface/transformers.git
```
* `trust_remote_code=True`
Note that I have not yet tested the model myself; I will update when I know the VRAM requirements.
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ)
* [Disco Research's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/DiscoResearch/DiscoLM-mixtral-8x7b-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.97 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.00 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.00 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.98 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_true](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-3bit-128g-actorder_true) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.00 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-32g-actorder_true](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-3bit-32g-actorder_true) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.99 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit--1g-actorder_true](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-8bit--1g-actorder_true) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.96 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-128g-actorder_true](https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ/tree/gptq-3bit-128g-actorder_true) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.00 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `DiscoLM-mixtral-8x7b-v2-GPTQ`:
```shell
mkdir DiscoLM-mixtral-8x7b-v2-GPTQ
huggingface-cli download TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ --local-dir DiscoLM-mixtral-8x7b-v2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir DiscoLM-mixtral-8x7b-v2-GPTQ
huggingface-cli download TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir DiscoLM-mixtral-8x7b-v2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir DiscoLM-mixtral-8x7b-v2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ --local-dir DiscoLM-mixtral-8x7b-v2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
**NOTE** This likely doesn't work at the moment.
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `DiscoLM-mixtral-8x7b-v2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
**NOTE** This likely doesn't work at the moment.
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
**NOTE** I can't get this working yet.
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/DiscoLM-mixtral-8x7b-v2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=True,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
These GPTQs are not yet working.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Disco Research's Discolm Mixtral 8X7B v2

*Eight french experts sitting at a table. There's lots of wind.*
# DiscoLM Mixtral 8x7b alpha
**DiscoLM Mixtral 8x7b alpha** is an experimental 8x7b MoE model based on [Mistral AI´s Mixtral 8x7b](https://twitter.com/MistralAI/status/1733150512395038967).
This model is based on experimental code converting the model weights to huggingface format and enabling Transformers-based inference.
It was then finetuned on the Synthia, MetaMathQA and Capybara datasets.
DiscoLM Mixtral 8x7b alpha is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was created by [Björn Plüster](https://huggingface.co/bjoernp) with lots of support from the community.
**Many thanks to [HessianAI](https://hessian.ai/) for providing the compute resources for this project and to the great people at [LAION](https://laion.ai) without whom this project would not have been possible!**
## Table of Contents
1. [Download](#download)
2. [Benchmarks](#benchmarks)
3. [Prompt Format](#prompt-format)
4. [Dataset](#datasets)
5. [Acknowledgements](#acknowledgements)
6. [Contact](#contact)
7. [About DiscoResearch](#about-discoresearch)
8. [Disclaimer](#disclaimer)
## Download
**Please note that you have to run the model with `trust_remote_code=True` until the new arch is merged into transformers!**
| Huggingface | GPTQ | GGUF | AWQ | *Base Model* |
|-------|-------|-------|-------|-------|
| [Link](https://huggingface.co/DiscoResearch/DiscoLM-Mixtral-8x7b) | tbc | tbc | tbc | tbc |
## Benchmarks
### Huggingface Leaderboard
This model is still an early alpha with experimental code, and we can't guarantee that all values are correct.
The following are the scores from our own evaluation.
| Metric | Value |
|-----------------------|-------|
| ARC (25-shot) | 67.32 |
| HellaSwag (10-shot) | 86.25 |
| MMLU (5-shot) | 70.72 |
| TruthfulQA (0-shot) | 54.17 |
| Winogrande (5-shot) | 80.72 |
| GSM8k (5-shot) | 25.09 (bad score. no clue why)|
| **Avg.** | **64.05** |
We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard.
### FastEval
tbc
### MTBench
tbc
## Prompt Format
**Please note that you have to run the model with `trust_remote_code=True` until the new arch is merged into transformers!**
This model follows the ChatML format:
```
<|im_start|>system
You are DiscoLM, a helpful assistant.
<|im_end|>
<|im_start|>user
Please tell me possible reasons to call a research collective "Disco Research"<|im_end|>
<|im_start|>assistant
```
This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the apply_chat_template() method:
```python
chat = [
{"role": "system", "content": "You are DiscoLM, a helpful assistant."},
{"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized and formatted conversation ready to pass to `model.generate()`.
Basic inference code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("DiscoResearch/DiscoLM-mixtral-8x7b-v2", low_cpu_mem_usage=True, device_map="auto", trust_remote_code=True)
tok = AutoTokenizer.from_pretrained("DiscoResearch/DiscoLM-mixtral-8x7b-v2")
chat = [
{"role": "system", "content": "You are DiscoLM, a helpful assistant."},
{"role": "user", "content": "Please tell me possible reasons to call a research collective Disco Research"}
]
x = tok.apply_chat_template(chat, tokenize=True, return_tensors="pt", add_generation_prompt=True).cuda()
x = model.generate(x, max_new_tokens=128).cpu()
print(tok.batch_decode(x))
```
## Datasets
The following datasets were used for training DiscoLM Mixtral 8x7b alpha:
* [Synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
* [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
* NousResearch Capybara (currently not public)
Many thanks for all dataset providers/curators!
## Contact
Best way to reach us is on our [Discord](https://discord.gg/S8W8B5nz3v).
## About DiscoResearch
DiscoResearch is an aspiring open research community. Disco should be a place where researchers from many communities can come together to combine their expertise and create innovative and groundbreaking LLMs. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!
## Acknowledgements
Many thanks first and foremost to [Mistral AI](https://huggingface.co/mistralai) for releasing another awesome model and their release strategy that is much fun for the whole community.
Additionally, many thanks in particular to [Dmytro Dzhulgakov](https://huggingface.co/dzhulgakov) who was the first one with a running [inference implementation](https://github.com/dzhulgakov/llama-mistral), [Vik](https://huggingface.co/vikhyatk) who spotted a critical bug in our first implementation (he actually read the paper!), [winglian](https://huggingface.co/winglian) for helpful advice and Axolotl which was used to finetune the model, [MigTissera](https://huggingface.co/migtissera), [MetaMath](https://huggingface.co/meta-math) and [NousResearch](https://huggingface.co/NousResearch) for their great datasets, and everyone who participated in this awesome speedrun on either our, the [Nous Research](https://huggingface.co/NousResearch) or one of the other Discords (please contact us if we forgot to mention you here!).
**DiscoLM Mixtral is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was created by [Björn Plüster](https://huggingface.co/bjoernp).
The model was trained with compute provided by [HessianAI](https://hessian.ai/); many thanks as well to [LAION](https://laion.ai) for their coordination and providing invaluable contacts + advice.**
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes.
|
loremipsum3658/jur-v5-fsl-tuned-cla-assun | loremipsum3658 | "2022-11-01T19:27:51Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-11-01T19:14:38Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1099 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1099,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
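Put together, these parameters correspond roughly to the following `sentence_transformers` training call; the base checkpoint and the training pairs below are placeholders, since the actual training data is not published, and only the hyperparameters mirror the values above.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder data and base model; only the hyperparameters come from the card above.
model = SentenceTransformer("bert-base-multilingual-cased")
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.9)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=110,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
    scheduler="WarmupLinear",
)
```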
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huawei-noah/pangu-CodeCLM-full-300m | huawei-noah | "2024-10-25T11:19:41Z" | 11 | 1 | null | [
"pytorch",
"gpt2",
"python",
"code",
"dataset:huawei-noah/python_text2code",
"region:us"
] | null | "2024-09-05T12:56:29Z" | ---
datasets:
- huawei-noah/python_text2code
tags:
- python
- code
---
# Model Card for pangu-CodeCLM-full-300m
- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Paper:** https://aclanthology.org/2024.eacl-long.72.pdf
## Model Description
This model is a PanGu-Alpha model further trained on text-to-code pairs
collected from public GitHub repositories.
Training was performed with the CodeCLM objective, i.e. causal language modeling with the loss calculated only over code tokens, and with full embedding separation (all tokens are assigned different embeddings).
In order to use the model, first download it from the hub and have a look at the [evaluation section](https://github.com/huawei-noah/noah-research/blob/master/NLP/text2code_mrpt/README.md#evaluation).
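To make the objective concrete, a sketch of the label masking involved in CodeCLM-style training is shown below; the token IDs, the boundary index, and the commented model call are hypothetical, and the repository linked above contains the actual training code.
```python
import torch

# Toy illustration of CodeCLM: causal LM where only code-token positions contribute
# to the loss. Positions belonging to the natural-language description get label -100,
# which Hugging Face-style causal-LM losses ignore.
input_ids = torch.tensor([[11, 57, 203, 88, 4005, 4190, 4022, 4099]])  # text tokens then code tokens
code_start = 4                                                         # assumed modality boundary

labels = input_ids.clone()
labels[:, :code_start] = -100  # mask out text positions

# The labels would then be used as usual, e.g.:
#   outputs = model(input_ids=input_ids, labels=labels)
#   loss = outputs.loss   # cross-entropy over code tokens only
```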
## Citation [optional]
**BibTeX:**
```html
@inproceedings{christopoulou-etal-2024-text,
title = "Text-to-Code Generation with Modality-relative Pre-training",
author = "Christopoulou, Fenia and
Zhang, Guchun and
Lampouras, Gerasimos",
editor = "Graham, Yvette and
Purver, Matthew",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = mar,
year = "2024",
address = "St. Julian{'}s, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-long.72",
pages = "1194--1208",
abstract = "Large pre-trained language models have recently been expanded and applied to programming language tasks with great success, often through further pre-training of a strictly-natural language model{--}where training sequences typically contain both natural and (linearised) programming language. Such approaches effectively map both modalities of the sequence into the same embedding space. However, programming language keywords (e.g. {``}while{''}) often have very strictly defined semantics. As such, transfer learning from their natural language usage may not necessarily be beneficial to their code application and vise versa. Assuming an already pre-trained language model, in this work we investigate how sequence tokens can be adapted and represented differently, depending on which modality they belong to, and to the ultimate benefit of the downstream task. We experiment with separating embedding spaces between modalities during further model pre-training with modality-relative training objectives. We focus on text-to-code generation and observe consistent improvements across two backbone models and two test sets, measuring pass@$k$ and a novel incremental variation.",
}
```
## Model Card Authors [optional]
[Fenia Christopoulou](mailto:[email protected])
|
muciz/iseng | muciz | "2025-01-03T13:57:42Z" | 5 | 0 | null | [
"region:us"
] | null | "2024-03-29T16:03:40Z" | Just for personal testing purposes, because downloading models from CivitAI is so slow |
mradermacher/mistral-7b-arc_reasoning_v2-GGUF | mradermacher | "2025-01-20T15:40:43Z" | 221 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:bany1111/mistral-7b-arc_reasoning_v2",
"base_model:quantized:bany1111/mistral-7b-arc_reasoning_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-20T15:24:17Z" | ---
base_model: bany1111/mistral-7b-arc_reasoning_v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/bany1111/mistral-7b-arc_reasoning_v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
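For example, one way to run one of the quants listed below from Python is via `llama-cpp-python`; the filename matches the Q4_K_M entry in the table, and the prompt and settings are illustrative.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Sketch only: pulls the Q4_K_M file listed in the table below straight from the Hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/mistral-7b-arc_reasoning_v2-GGUF",
    filename="mistral-7b-arc_reasoning_v2.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Explain abductive reasoning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```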
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-arc_reasoning_v2-GGUF/resolve/main/mistral-7b-arc_reasoning_v2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KingKazma/cnn_dailymail_6789_200000_100000_v1_train | KingKazma | "2023-08-21T15:09:56Z" | 4 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | "2023-08-21T15:09:54Z" |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# cnn_dailymail_6789_200000_100000_v1_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/cnn_dailymail_6789_200000_100000_v1_train")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 1082
* Number of training documents: 200000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - mr - people - police - year | 5 | -1_said_mr_people_police |
| 0 | league - goal - cup - player - club | 104194 | 0_league_goal_cup_player |
| 1 | murder - shooting - police - county - shot | 19115 | 1_murder_shooting_police_county |
| 2 | nfl - nba - quarterback - basketball - game | 3824 | 2_nfl_nba_quarterback_basketball |
| 3 | hospital - doctor - cancer - transplant - baby | 1993 | 3_hospital_doctor_cancer_transplant |
| 4 | murray - tennis - wimbledon - federer - djokovic | 1820 | 4_murray_tennis_wimbledon_federer |
| 5 | ship - boat - cruise - rescue - mountain | 1497 | 5_ship_boat_cruise_rescue |
| 6 | plane - flight - airline - passenger - airport | 1419 | 6_plane_flight_airline_passenger |
| 7 | romney - republican - republicans - democrats - obama | 1397 | 7_romney_republican_republicans_democrats |
| 8 | film - movie - comedy - actor - character | 1279 | 8_film_movie_comedy_actor |
| 9 | space - mars - nasa - planet - earth | 1244 | 9_space_mars_nasa_planet |
| 10 | iran - israeli - israel - palestinian - gaza | 1128 | 10_iran_israeli_israel_palestinian |
| 11 | war - soldier - army - afghanistan - medal | 1096 | 11_war_soldier_army_afghanistan |
| 12 | dog - cat - animal - pet - dogs | 998 | 12_dog_cat_animal_pet |
| 13 | mcilroy - golf - woods - ryder - pga | 979 | 13_mcilroy_golf_woods_ryder |
| 14 | ukraine - russia - putin - russian - ukrainian | 939 | 14_ukraine_russia_putin_russian |
| 15 | medal - gold - olympic - games - olympics | 817 | 15_medal_gold_olympic_games |
| 16 | korea - korean - north - kim - koreas | 784 | 16_korea_korean_north_kim |
| 17 | fashion - dress - collection - style - model | 764 | 17_fashion_dress_collection_style |
| 18 | driver - driving - car - road - crash | 740 | 18_driver_driving_car_road |
| 19 | somalia - alshabaab - sudan - kenya - kenyan | 739 | 19_somalia_alshabaab_sudan_kenya |
| 20 | hamilton - prix - rosberg - f1 - formula | 722 | 20_hamilton_prix_rosberg_f1 |
| 21 | shark - whale - dolphin - fish - sea | 705 | 21_shark_whale_dolphin_fish |
| 22 | property - price - house - estate - buyer | 642 | 22_property_price_house_estate |
| 23 | mayweather - fight - pacquiao - boxing - froch | 630 | 23_mayweather_fight_pacquiao_boxing |
| 24 | stabbed - murder - knife - heard - crown | 627 | 24_stabbed_murder_knife_heard |
| 25 | ebola - virus - liberia - leone - sierra | 609 | 25_ebola_virus_liberia_leone |
| 26 | car - vehicle - cars - electric - motor | 608 | 26_car_vehicle_cars_electric |
| 27 | teacher - school - sex - student - sexual | 584 | 27_teacher_school_sex_student |
| 28 | zoo - elephant - animal - rhino - snake | 578 | 28_zoo_elephant_animal_rhino |
| 29 | painting - art - artist - banksy - artwork | 560 | 29_painting_art_artist_banksy |
| 30 | mexican - cartel - mexico - mexicos - drug | 540 | 30_mexican_cartel_mexico_mexicos |
| 31 | prince - duchess - royal - queen - duke | 529 | 31_prince_duchess_royal_queen |
| 32 | nhs - patient - ae - trust - patients | 517 | 32_nhs_patient_ae_trust |
| 33 | pupil - education - school - ofsted - schools | 512 | 33_pupil_education_school_ofsted |
| 34 | snowden - nsa - intelligence - surveillance - snowdens | 492 | 34_snowden_nsa_intelligence_surveillance |
| 35 | chinese - bo - hong - china - kong | 464 | 35_chinese_bo_hong_china |
| 36 | mcdonalds - chocolate - pizza - food - burger | 462 | 36_mcdonalds_chocolate_pizza_food |
| 37 | album - song - music - band - beatles | 454 | 37_album_song_music_band |
| 38 | fire - blaze - firefighter - smoke - flame | 452 | 38_fire_blaze_firefighter_smoke |
| 39 | pope - vatican - francis - cardinal - benedict | 446 | 39_pope_vatican_francis_cardinal |
| 40 | labour - ukip - miliband - farage - tory | 431 | 40_labour_ukip_miliband_farage |
| 41 | iphone - apple - ipad - samsung - tablet | 430 | 41_iphone_apple_ipad_samsung |
| 42 | chavez - venezuela - venezuelan - maduro - farc | 428 | 42_chavez_venezuela_venezuelan_maduro |
| 43 | afghan - afghanistan - taliban - kabul - karzai | 397 | 43_afghan_afghanistan_taliban_kabul |
| 44 | ancient - archaeologist - tomb - roman - bc | 369 | 44_ancient_archaeologist_tomb_roman |
| 45 | africa - african - africas - continent - malawi | 338 | 45_africa_african_africas_continent |
| 46 | weather - rain - temperature - snow - flood | 329 | 46_weather_rain_temperature_snow |
| 47 | horse - jockey - racing - stakes - race | 329 | 47_horse_jockey_racing_stakes |
| 48 | syrian - syria - damascus - alassad - regime | 318 | 48_syrian_syria_damascus_alassad |
| 49 | novel - book - shades - author - fifty | 302 | 49_novel_book_shades_author |
| 50 | ferguson - wilson - brown - browns - louis | 295 | 50_ferguson_wilson_brown_browns |
| 51 | libya - libyan - gadhafi - tripoli - gadhafis | 292 | 51_libya_libyan_gadhafi_tripoli |
| 52 | weight - stone - diet - size - eating | 286 | 52_weight_stone_diet_size |
| 53 | facebook - user - app - facebooks - users | 281 | 53_facebook_user_app_facebooks |
| 54 | mubarak - egypt - egyptian - brotherhood - egypts | 279 | 54_mubarak_egypt_egyptian_brotherhood |
| 55 | sexual - sex - crown - indecent - girl | 273 | 55_sexual_sex_crown_indecent |
| 56 | fraud - money - jailed - crown - account | 266 | 56_fraud_money_jailed_crown |
| 57 | nazi - hitler - auschwitz - holocaust - jews | 265 | 57_nazi_hitler_auschwitz_holocaust |
| 58 | bank - rbs - banking - bonus - barclays | 255 | 58_bank_rbs_banking_bonus |
| 59 | tsarnaev - tamerlan - boston - dzhokhar - bombing | 253 | 59_tsarnaev_tamerlan_boston_dzhokhar |
| 60 | savile - clifford - paedophile - jimmy - abuse | 253 | 60_savile_clifford_paedophile_jimmy |
| 61 | greece - eurozone - greek - euro - bailout | 247 | 61_greece_eurozone_greek_euro |
| 62 | snow - storm - weather - inch - temperature | 244 | 62_snow_storm_weather_inch |
| 63 | delhi - india - rape - singh - indian | 241 | 63_delhi_india_rape_singh |
| 64 | cancer - patient - prostate - treatment - drug | 234 | 64_cancer_patient_prostate_treatment |
| 65 | haiti - portauprince - haitian - earthquake - haitis | 233 | 65_haiti_portauprince_haitian_earthquake |
| 66 | bbc - presenter - radio - programme - archers | 226 | 66_bbc_presenter_radio_programme |
| 67 | hacking - murdoch - coulson - leveson - brooks | 214 | 67_hacking_murdoch_coulson_leveson |
| 68 | cuba - cuban - castro - havana - fidel | 214 | 68_cuba_cuban_castro_havana |
| 69 | zimmerman - trayvon - zimmermans - martin - sanford | 203 | 69_zimmerman_trayvon_zimmermans_martin |
| 70 | samesex - marriage - gay - supreme - ruling | 202 | 70_samesex_marriage_gay_supreme |
| 71 | bp - oil - spill - gulf - deepwater | 201 | 71_bp_oil_spill_gulf |
| 72 | tobacco - smoking - cigarette - ecigarettes - smoker | 198 | 72_tobacco_smoking_cigarette_ecigarettes |
| 73 | mandela - mandelas - nelson - african - south | 198 | 73_mandela_mandelas_nelson_african |
| 74 | climate - emission - carbon - warming - global | 197 | 74_climate_emission_carbon_warming |
| 75 | turkish - turkey - erdogan - turkeys - istanbul | 196 | 75_turkish_turkey_erdogan_turkeys |
| 76 | console - xbox - gaming - playstation - game | 189 | 76_console_xbox_gaming_playstation |
| 77 | pakistani - pakistan - taliban - pakistans - militant | 184 | 77_pakistani_pakistan_taliban_pakistans |
| 78 | crash - driver - suv - car - truck | 177 | 78_crash_driver_suv_car |
| 79 | armstrong - tour - froome - doping - cavendish | 176 | 79_armstrong_tour_froome_doping |
| 80 | jackson - jacksons - aeg - murray - propofol | 175 | 80_jackson_jacksons_aeg_murray |
| 81 | tsa - airport - screening - security - screener | 173 | 81_tsa_airport_screening_security |
| 82 | pistorius - steenkamp - reeva - oscar - pretoria | 172 | 82_pistorius_steenkamp_reeva_oscar |
| 83 | scotland - salmond - scottish - independence - snp | 172 | 83_scotland_salmond_scottish_independence |
| 84 | fire - wildfire - blaze - firefighter - acre | 170 | 84_fire_wildfire_blaze_firefighter |
| 85 | robbery - thief - gang - cctv - jailed | 166 | 85_robbery_thief_gang_cctv |
| 86 | sex - dating - relationship - partner - men | 166 | 86_sex_dating_relationship_partner |
| 87 | gun - nra - newtown - background - firearm | 165 | 87_gun_nra_newtown_background |
| 88 | earthquake - quake - tsunami - magnitude - geological | 165 | 88_earthquake_quake_tsunami_magnitude |
| 89 | boko - haram - nigeria - nigerian - shekau | 163 | 89_boko_haram_nigeria_nigerian |
| 90 | volcano - lava - eruption - ash - volcanic | 162 | 90_volcano_lava_eruption_ash |
| 91 | glass - google - device - wearable - smartwatch | 161 | 91_glass_google_device_wearable |
| 92 | kennedy - kennedys - oswald - assassination - 1963 | 159 | 92_kennedy_kennedys_oswald_assassination |
| 93 | china - zhang - yue - chinas - chinese | 159 | 93_china_zhang_yue_chinas |
| 94 | pirate - ship - piracy - maersk - somalia | 158 | 94_pirate_ship_piracy_maersk |
| 95 | china - japanese - japan - chinese - japans | 153 | 95_china_japanese_japan_chinese |
| 96 | abbott - gillard - turnbull - minister - rudd | 152 | 96_abbott_gillard_turnbull_minister |
| 97 | hollande - sarkozy - trierweiler - french - francois | 148 | 97_hollande_sarkozy_trierweiler_french |
| 98 | sugar - calorie - diet - obesity - fat | 144 | 98_sugar_calorie_diet_obesity |
| 99 | reactor - fukushima - nuclear - plant - radiation | 141 | 99_reactor_fukushima_nuclear_plant |
| 100 | mugabe - zimbabwe - tsvangirai - mugabes - zimbabwes | 137 | 100_mugabe_zimbabwe_tsvangirai_mugabes |
| 101 | tornado - storm - oklahoma - twister - weather | 137 | 101_tornado_storm_oklahoma_twister |
| 102 | ira - belfast - ireland - sinn - fein | 136 | 102_ira_belfast_ireland_sinn |
| 103 | sony - korea - pascal - north - korean | 133 | 103_sony_korea_pascal_north |
| 104 | password - computer - hacker - malware - cyber | 133 | 104_password_computer_hacker_malware |
| 105 | berlusconi - berlusconis - silvio - italian - bunga | 132 | 105_berlusconi_berlusconis_silvio_italian |
| 106 | eu - brussels - cameron - referendum - european | 130 | 106_eu_brussels_cameron_referendum |
| 107 | bali - sukumaran - chan - indonesian - sandiford | 130 | 107_bali_sukumaran_chan_indonesian |
| 108 | fraternity - phi - campus - kappa - hazing | 127 | 108_fraternity_phi_campus_kappa |
| 109 | malaysia - malaysian - mh370 - search - flight | 125 | 109_malaysia_malaysian_mh370_search |
| 110 | sharrouf - australian - islamic - sydney - syria | 125 | 110_sharrouf_australian_islamic_sydney |
| 111 | yemen - yemeni - sanaa - saleh - yemens | 125 | 111_yemen_yemeni_sanaa_saleh |
| 112 | sandusky - penn - paterno - sanduskys - jerry | 123 | 112_sandusky_penn_paterno_sanduskys |
| 113 | knox - sollecito - kercher - knoxs - meredith | 122 | 113_knox_sollecito_kercher_knoxs |
| 114 | baghdad - iraqi - iraq - shiite - sunni | 121 | 114_baghdad_iraqi_iraq_shiite |
| 115 | hernandez - hernandezs - lloyd - odin - patriots | 121 | 115_hernandez_hernandezs_lloyd_odin |
| 116 | spider - bee - insect - ant - butterfly | 121 | 116_spider_bee_insect_ant |
| 117 | transcript - student - todays - roll - news | 120 | 117_transcript_student_todays_roll |
| 118 | bear - polar - cub - bears - wildlife | 118 | 118_bear_polar_cub_bears |
| 119 | rail - hs2 - fare - train - transport | 117 | 119_rail_hs2_fare_train |
| 120 | bangkok - thaksin - thai - yingluck - thailand | 117 | 120_bangkok_thaksin_thai_yingluck |
| 121 | lottery - jackpot - powerball - ticket - winning | 115 | 121_lottery_jackpot_powerball_ticket |
| 122 | island - beach - resort - hotel - spa | 113 | 122_island_beach_resort_hotel |
| 123 | hiv - aids - virus - hivaids - infection | 111 | 123_hiv_aids_virus_hivaids |
| 124 | secret - gonzalez - agent - service - fence | 109 | 124_secret_gonzalez_agent_service |
| 125 | rio - janeiro - brazilian - brazil - paulo | 108 | 125_rio_janeiro_brazilian_brazil |
| 126 | marijuana - pot - legalization - cannabis - colorado | 108 | 126_marijuana_pot_legalization_cannabis |
| 127 | bird - pigeon - birds - owl - nest | 107 | 127_bird_pigeon_birds_owl |
| 128 | hurricane - storm - tropical - mph - bermuda | 106 | 128_hurricane_storm_tropical_mph |
| 129 | hebdo - charlie - merah - coulibaly - kouachi | 105 | 129_hebdo_charlie_merah_coulibaly |
| 130 | lohan - lindsay - lohans - probation - sautner | 104 | 130_lohan_lindsay_lohans_probation |
| 131 | saudi - arabia - abdullah - arabias - riyadh | 103 | 131_saudi_arabia_abdullah_arabias |
| 132 | transcript - curriculum - todays - feedback - click | 103 | 132_transcript_curriculum_todays_feedback |
| 133 | energy - gas - price - ofgem - bill | 100 | 133_energy_gas_price_ofgem |
| 134 | hasan - hood - fort - hasans - nidal | 98 | 134_hasan_hood_fort_hasans |
| 135 | wine - beer - bottle - champagne - alcohol | 97 | 135_wine_beer_bottle_champagne |
| 136 | eu - migrant - immigration - migration - romanians | 96 | 136_eu_migrant_immigration_migration |
| 137 | irs - lerner - taxexempt - issa - koskinen | 96 | 137_irs_lerner_taxexempt_issa |
| 138 | bbc - patten - bbcs - corporation - lord | 95 | 138_bbc_patten_bbcs_corporation |
| 139 | obama - michelle - malia - obamas - lady | 95 | 139_obama_michelle_malia_obamas |
| 140 | madeleine - mccann - praia - luz - portuguese | 94 | 140_madeleine_mccann_praia_luz |
| 141 | evans - ched - sheffield - massey - oldham | 94 | 141_evans_ched_sheffield_massey |
| 142 | salmonella - outbreak - coli - listeria - food | 93 | 142_salmonella_outbreak_coli_listeria |
| 143 | falklands - falkland - islands - argentina - argentine | 91 | 143_falklands_falkland_islands_argentina |
| 144 | typhoon - philippines - tacloban - haiyan - manila | 90 | 144_typhoon_philippines_tacloban_haiyan |
| 145 | calais - migrant - port - bouchart - lorry | 89 | 145_calais_migrant_port_bouchart |
| 146 | olympic - torch - olympics - games - ceremony | 89 | 146_olympic_torch_olympics_games |
| 147 | immigration - immigrant - undocumented - reform - republicans | 89 | 147_immigration_immigrant_undocumented_reform |
| 148 | monis - siege - cafe - lindt - hostage | 89 | 148_monis_siege_cafe_lindt |
| 149 | cosby - cosbys - drugged - comedian - constand | 88 | 149_cosby_cosbys_drugged_comedian |
| 150 | motogp - lorenzo - rossi - pedrosa - marquez | 87 | 150_motogp_lorenzo_rossi_pedrosa |
| 151 | modi - indias - india - delhi - gandhi | 87 | 151_modi_indias_india_delhi |
| 152 | mansion - property - apartment - estate - leibovitz | 86 | 152_mansion_property_apartment_estate |
| 153 | shopper - shopping - retailer - retail - sale | 85 | 153_shopper_shopping_retailer_retail |
| 154 | tesco - aldi - supermarket - waitrose - sainsburys | 85 | 154_tesco_aldi_supermarket_waitrose |
| 155 | nascar - busch - stewart - ward - race | 85 | 155_nascar_busch_stewart_ward |
| 156 | sri - tamil - lankan - lanka - rajapaksa | 84 | 156_sri_tamil_lankan_lanka |
| 157 | drought - water - spill - reservoir - river | 84 | 157_drought_water_spill_reservoir |
| 158 | lanza - newtown - hook - sandy - elementary | 83 | 158_lanza_newtown_hook_sandy |
| 159 | va - veterans - shinseki - veteran - phoenix | 83 | 159_va_veterans_shinseki_veteran |
| 160 | dinosaur - fossil - skeleton - specie - specimen | 82 | 160_dinosaur_fossil_skeleton_specie |
| 161 | adoption - peaches - davion - geldof - adopted | 82 | 161_adoption_peaches_davion_geldof |
| 162 | kanye - kardashian - kim - kris - wedding | 80 | 162_kanye_kardashian_kim_kris |
| 163 | panda - tian - zoo - cub - pandas | 79 | 163_panda_tian_zoo_cub |
| 164 | cannabis - plant - marijuana - drug - factory | 79 | 164_cannabis_plant_marijuana_drug |
| 165 | alcohol - drinking - drink - liver - binge | 78 | 165_alcohol_drinking_drink_liver |
| 166 | sexual - sinclair - assault - military - lackland | 78 | 166_sexual_sinclair_assault_military |
| 167 | occupy - protester - zuccotti - demonstrator - wall | 77 | 167_occupy_protester_zuccotti_demonstrator |
| 168 | name - names - baby - messiah - naming | 77 | 168_name_names_baby_messiah |
| 169 | rihanna - brown - drake - probation - browns | 77 | 169_rihanna_brown_drake_probation |
| 170 | strausskahn - diallo - dominique - strausskahns - imf | 76 | 170_strausskahn_diallo_dominique_strausskahns |
| 171 | letizia - felipe - spanish - king - queen | 76 | 171_letizia_felipe_spanish_king |
| 172 | toyota - gm - recall - ignition - nhtsa | 76 | 172_toyota_gm_recall_ignition |
| 173 | train - railroad - metronorth - derailment - derailed | 76 | 173_train_railroad_metronorth_derailment |
| 174 | bergdahl - bergdahls - bowe - taliban - army | 76 | 174_bergdahl_bergdahls_bowe_taliban |
| 175 | tattoo - tattooed - tattoos - ink - tattooing | 75 | 175_tattoo_tattooed_tattoos_ink |
| 176 | veil - niqab - muslim - burka - ban | 74 | 176_veil_niqab_muslim_burka |
| 177 | flu - h1n1 - swine - vaccine - virus | 74 | 177_flu_h1n1_swine_vaccine |
| 178 | measles - vaccine - mmr - vaccinated - vaccination | 74 | 178_measles_vaccine_mmr_vaccinated |
| 179 | tax - osborne - chancellor - income - labour | 74 | 179_tax_osborne_chancellor_income |
| 180 | anthony - caylee - anthonys - casey - baez | 74 | 180_anthony_caylee_anthonys_casey |
| 181 | gay - sochi - russian - russia - propaganda | 74 | 181_gay_sochi_russian_russia |
| 182 | economy - growth - osborne - recession - chancellor | 73 | 182_economy_growth_osborne_recession |
| 183 | philippines - philippine - ampatuan - aquino - maguindanao | 73 | 183_philippines_philippine_ampatuan_aquino |
| 184 | weiner - spitzer - weiners - abedin - huma | 72 | 184_weiner_spitzer_weiners_abedin |
| 185 | bosnian - mladic - serb - srebrenica - yugoslavia | 71 | 185_bosnian_mladic_serb_srebrenica |
| 186 | mh17 - malaysia - ukraine - airlines - crash | 71 | 186_mh17_malaysia_ukraine_airlines |
| 187 | wealth - richest - gates - billion - billionaire | 70 | 187_wealth_richest_gates_billion |
| 188 | eta - basque - spanish - spain - spains | 70 | 188_eta_basque_spanish_spain |
| 189 | wars - star - abrams - vii - film | 70 | 189_wars_star_abrams_vii |
| 190 | sleep - brain - neuron - stimulation - study | 69 | 190_sleep_brain_neuron_stimulation |
| 191 | lottery - euromillions - jackpot - bayford - ticket | 69 | 191_lottery_euromillions_jackpot_bayford |
| 192 | miner - mine - coal - miners - mining | 68 | 192_miner_mine_coal_miners |
| 193 | ivf - fertility - embryo - womb - egg | 68 | 193_ivf_fertility_embryo_womb |
| 194 | 911 - memorial - museum - trade - towers | 68 | 194_911_memorial_museum_trade |
| 195 | border - unaccompanied - immigrant - patrol - immigration | 68 | 195_border_unaccompanied_immigrant_patrol |
| 196 | ford - toronto - mayor - rob - crack | 67 | 196_ford_toronto_mayor_rob |
| 197 | neanderthals - neanderthal - homo - modern - ancestor | 67 | 197_neanderthals_neanderthal_homo_modern |
| 198 | jeffs - flds - sect - ranch - polygamy | 66 | 198_jeffs_flds_sect_ranch |
| 199 | sandy - storm - superstorm - hurricane - fema | 66 | 199_sandy_storm_superstorm_hurricane |
| 200 | obesity - obese - overweight - bmi - weight | 66 | 200_obesity_obese_overweight_bmi |
| 201 | dewani - anni - shrien - dewanis - tongo | 66 | 201_dewani_anni_shrien_dewanis |
| 202 | robot - robots - robotics - pepper - humanoid | 65 | 202_robot_robots_robotics_pepper |
| 203 | cia - interrogation - torture - cheney - intelligence | 64 | 203_cia_interrogation_torture_cheney |
| 204 | euthanasia - assisted - suicide - terminally - die | 64 | 204_euthanasia_assisted_suicide_terminally |
| 205 | seeker - asylum - detention - refugee - manus | 64 | 205_seeker_asylum_detention_refugee |
| 206 | assange - wikileaks - embassy - ecuadorian - assanges | 64 | 206_assange_wikileaks_embassy_ecuadorian |
| 207 | fracking - shale - gas - drilling - balcombe | 63 | 207_fracking_shale_gas_drilling |
| 208 | ferry - sewol - ship - korean - sinking | 63 | 208_ferry_sewol_ship_korean |
| 209 | tree - christmas - fir - festive - decoration | 63 | 209_tree_christmas_fir_festive |
| 210 | wave - surfer - surfing - surf - swell | 62 | 210_wave_surfer_surfing_surf |
| 211 | deportation - deported - foreign - criminal - deport | 62 | 211_deportation_deported_foreign_criminal |
| 212 | mosquito - dengue - virus - nile - chikungunya | 62 | 212_mosquito_dengue_virus_nile |
| 213 | sloot - der - flores - holloway - van | 62 | 213_sloot_der_flores_holloway |
| 214 | boston - bauman - marathon - leg - celeste | 62 | 214_boston_bauman_marathon_leg |
| 215 | sotomayor - ginsburg - supreme - sotomayors - justice | 62 | 215_sotomayor_ginsburg_supreme_sotomayors |
| 216 | bulger - bulgers - flemmi - whitey - greig | 61 | 216_bulger_bulgers_flemmi_whitey |
| 217 | castro - dejesus - knight - ariel - berry | 61 | 217_castro_dejesus_knight_ariel |
| 218 | parking - warden - council - ticket - fine | 61 | 218_parking_warden_council_ticket |
| 219 | mafia - rancadore - ndrangheta - italian - italy | 61 | 219_mafia_rancadore_ndrangheta_italian |
| 220 | aid - dfid - 07 - spending - development | 60 | 220_aid_dfid_07_spending |
| 221 | tip - paler - buzi - waitress - server | 60 | 221_tip_paler_buzi_waitress |
| 222 | drug - ecstasy - methadone - drugs - mdma | 60 | 222_drug_ecstasy_methadone_drugs |
| 223 | migrant - lampedusa - mediterranean - boat - italian | 60 | 223_migrant_lampedusa_mediterranean_boat |
| 224 | suu - kyi - myanmar - aung - burma | 59 | 224_suu_kyi_myanmar_aung |
| 225 | bhutto - musharraf - pakistan - pakistans - benazir | 59 | 225_bhutto_musharraf_pakistan_pakistans |
| 226 | hindu - diwali - india - festival - delhi | 59 | 226_hindu_diwali_india_festival |
| 227 | pollution - smog - air - beijing - ozone | 59 | 227_pollution_smog_air_beijing |
| 228 | submarine - hms - ship - navy - hunley | 59 | 228_submarine_hms_ship_navy |
| 229 | wedding - bride - betar - groom - couple | 58 | 229_wedding_bride_betar_groom |
| 230 | drone - unmanned - drones - faa - aircraft | 58 | 230_drone_unmanned_drones_faa |
| 231 | childcare - benefit - income - child - tax | 58 | 231_childcare_benefit_income_child |
| 232 | saatchi - nigella - lawson - grillo - lawsons | 58 | 232_saatchi_nigella_lawson_grillo |
| 233 | abortion - clinic - parenthood - texas - antiabortion | 57 | 233_abortion_clinic_parenthood_texas |
| 234 | zara - tindall - phillips - eventing - equestrian | 57 | 234_zara_tindall_phillips_eventing |
| 235 | hockey - nhl - kings - vancouver - bruins | 57 | 235_hockey_nhl_kings_vancouver |
| 236 | balloon - heene - trappe - falcon - mayumi | 56 | 236_balloon_heene_trappe_falcon |
| 237 | ice - antarctic - greenland - glacier - sheet | 56 | 237_ice_antarctic_greenland_glacier |
| 238 | yacht - superyacht - superyachts - vessel - boat | 56 | 238_yacht_superyacht_superyachts_vessel |
| 239 | harris - rolf - bindi - indecent - alwen | 56 | 239_harris_rolf_bindi_indecent |
| 240 | archbishop - church - welby - bishop - marriage | 55 | 240_archbishop_church_welby_bishop |
| 241 | moussaoui - ghaith - zazi - qaeda - al | 55 | 241_moussaoui_ghaith_zazi_qaeda |
| 242 | woman - gap - women - gender - sandberg | 55 | 242_woman_gap_women_gender |
| 243 | isis - syria - jihadi - henning - islamic | 54 | 243_isis_syria_jihadi_henning |
| 244 | etan - patz - aron - graham - hernandez | 54 | 244_etan_patz_aron_graham |
| 245 | adoption - russian - shatto - adopted - adoptive | 54 | 245_adoption_russian_shatto_adopted |
| 246 | sunscreen - skin - tanning - sunbeds - melanoma | 54 | 246_sunscreen_skin_tanning_sunbeds |
| 247 | newsnight - savile - bbc - mcalpine - entwistle | 54 | 247_newsnight_savile_bbc_mcalpine |
| 248 | sherlock - thrones - cumberbatch - benedict - conan | 53 | 248_sherlock_thrones_cumberbatch_benedict |
| 249 | hillsborough - 96 - disaster - liverpool - 1989 | 52 | 249_hillsborough_96_disaster_liverpool |
| 250 | poppy - legion - ceramic - tower - poppies | 52 | 250_poppy_legion_ceramic_tower |
| 251 | mitchell - plebgate - plebs - rennard - downing | 52 | 251_mitchell_plebgate_plebs_rennard |
| 252 | province - lorry - china - li - car | 52 | 252_province_lorry_china_li |
| 253 | foxconn - factory - apple - worker - starnes | 52 | 253_foxconn_factory_apple_worker |
| 254 | porn - pornography - internet - filter - google | 51 | 254_porn_pornography_internet_filter |
| 255 | holmes - theater - aurora - colorado - shooting | 51 | 255_holmes_theater_aurora_colorado |
| 256 | king - luther - civil - selma - kings | 50 | 256_king_luther_civil_selma |
| 257 | edwards - hunter - rielle - cate - quinn | 50 | 257_edwards_hunter_rielle_cate |
| 258 | boles - planning - housing - countryside - development | 50 | 258_boles_planning_housing_countryside |
| 259 | bull - gored - pamplona - bullfighting - bullfight | 50 | 259_bull_gored_pamplona_bullfighting |
| 260 | uber - lyft - taxi - driver - ubers | 50 | 260_uber_lyft_taxi_driver |
| 261 | music - spotify - beats - itunes - streaming | 49 | 261_music_spotify_beats_itunes |
| 262 | scouts - scout - scouting - bsa - gay | 49 | 262_scouts_scout_scouting_bsa |
| 263 | epstein - epsteins - roberts - andrew - prince | 48 | 263_epstein_epsteins_roberts_andrew |
| 264 | missing - nida - disappearance - search - corfe | 47 | 264_missing_nida_disappearance_search |
| 265 | giffords - loughner - tucson - gabrielle - arizona | 47 | 265_giffords_loughner_tucson_gabrielle |
| 266 | alhilli - saad - maillaud - annecy - mollier | 47 | 266_alhilli_saad_maillaud_annecy |
| 267 | isis - syria - iraq - islamic - fighter | 47 | 267_isis_syria_iraq_islamic |
| 268 | dubai - mme - uae - defterios - sheikha | 46 | 268_dubai_mme_uae_defterios |
| 269 | destination - hotel - city - top - ranked | 46 | 269_destination_hotel_city_top |
| 270 | sochi - olympic - olympics - games - ioc | 46 | 270_sochi_olympic_olympics_games |
| 271 | hair - beard - moustache - mustache - facial | 46 | 271_hair_beard_moustache_mustache |
| 272 | driving - drinkdriving - alcohol - breath - limit | 45 | 272_driving_drinkdriving_alcohol_breath |
| 273 | frog - specie - amphibian - frogs - salamander | 45 | 273_frog_specie_amphibian_frogs |
| 274 | blasio - mayor - koch - de - bloomberg | 45 | 274_blasio_mayor_koch_de |
| 275 | adebolajo - rigby - adebowale - woolwich - drummer | 45 | 275_adebolajo_rigby_adebowale_woolwich |
| 276 | abdulmutallab - explosive - yemen - farouk - detonate | 45 | 276_abdulmutallab_explosive_yemen_farouk |
| 277 | petraeus - kelley - broadwell - paula - kelleys | 45 | 277_petraeus_kelley_broadwell_paula |
| 278 | antibiotic - bacteria - infection - mrsa - antibiotics | 45 | 278_antibiotic_bacteria_infection_mrsa |
| 279 | 3d - printer - printing - printed - print | 45 | 279_3d_printer_printing_printed |
| 280 | sham - marriage - immigration - bride - wedding | 45 | 280_sham_marriage_immigration_bride |
| 281 | benghazi - clinton - cia - attack - libya | 44 | 281_benghazi_clinton_cia_attack |
| 282 | canal - treasure - ship - venice - laquila | 44 | 282_canal_treasure_ship_venice |
| 283 | postal - mail - stamp - delivery - royal | 44 | 283_postal_mail_stamp_delivery |
| 284 | allergy - pollen - allergic - peanut - allergies | 44 | 284_allergy_pollen_allergic_peanut |
| 285 | xinhua - earthquake - sichuan - quake - province | 44 | 285_xinhua_earthquake_sichuan_quake |
| 286 | khmer - rouge - cambodia - cambodian - phnom | 44 | 286_khmer_rouge_cambodia_cambodian |
| 287 | thatcher - thatchers - funeral - margaret - baroness | 44 | 287_thatcher_thatchers_funeral_margaret |
| 288 | iii - richard - leicester - king - iiis | 43 | 288_iii_richard_leicester_king |
| 289 | mosque - muslims - muslim - islamic - islam | 43 | 289_mosque_muslims_muslim_islamic |
| 290 | airbus - 447 - bea - france - air | 43 | 290_airbus_447_bea_france |
| 291 | jobs - apple - steve - cook - apples | 43 | 291_jobs_apple_steve_cook |
| 292 | pipeline - keystone - xl - oil - transcanada | 43 | 292_pipeline_keystone_xl_oil |
| 293 | benefit - welfare - duncan - pensions - claimant | 43 | 293_benefit_welfare_duncan_pensions |
| 294 | malala - malalas - taliban - pakistan - education | 43 | 294_malala_malalas_taliban_pakistan |
| 295 | ufo - object - alien - ufos - sighting | 43 | 295_ufo_object_alien_ufos |
| 296 | goto - yukawa - dhaka - bangladesh - japanese | 43 | 296_goto_yukawa_dhaka_bangladesh |
| 297 | black - obama - romney - racial - cain | 42 | 297_black_obama_romney_racial |
| 298 | texting - driving - phone - driver - distracted | 42 | 298_texting_driving_phone_driver |
| 299 | manning - mannings - wikileaks - coombs - lamo | 41 | 299_manning_mannings_wikileaks_coombs |
| 300 | factory - garment - bangladesh - dhaka - rana | 41 | 300_factory_garment_bangladesh_dhaka |
| 301 | bank - robbery - teller - robber - holmes | 41 | 301_bank_robbery_teller_robber |
| 302 | tibetan - tibet - dalai - tibetans - lama | 41 | 302_tibetan_tibet_dalai_tibetans |
| 303 | census - hispanic - population - latino - hispanics | 41 | 303_census_hispanic_population_latino |
| 304 | routh - kyle - littlefield - kyles - rouths | 41 | 304_routh_kyle_littlefield_kyles |
| 305 | rodas - walker - porsche - walkers - gt | 41 | 305_rodas_walker_porsche_walkers |
| 306 | happiness - wellbeing - oecd - coin - index | 41 | 306_happiness_wellbeing_oecd_coin |
| 307 | iraq - isis - troop - iraqi - combat | 41 | 307_iraq_isis_troop_iraqi |
| 308 | tower - skyscraper - building - tallest - burj | 41 | 308_tower_skyscraper_building_tallest |
| 309 | saldanha - greig - jacintha - prank - saldanhas | 41 | 309_saldanha_greig_jacintha_prank |
| 310 | madoff - madoffs - ponzi - bernie - dipascali | 40 | 310_madoff_madoffs_ponzi_bernie |
| 311 | bales - gibbs - morlock - afghan - winfield | 40 | 311_bales_gibbs_morlock_afghan |
| 312 | chemical - syria - syrian - weapon - syrias | 40 | 312_chemical_syria_syrian_weapon |
| 313 | toy - bionic - magnet - buckyballs - exoskeleton | 40 | 313_toy_bionic_magnet_buckyballs |
| 314 | china - xi - chinese - chinas - beijing | 40 | 314_china_xi_chinese_chinas |
| 315 | 4g - att - verizon - network - wireless | 40 | 315_4g_att_verizon_network |
| 316 | garrido - dugard - garridos - jaycee - dugards | 40 | 316_garrido_dugard_garridos_jaycee |
| 317 | christie - wildstein - christies - jersey - governor | 40 | 317_christie_wildstein_christies_jersey |
| 318 | cowell - factor - talent - simon - audition | 40 | 318_cowell_factor_talent_simon |
| 319 | bieber - biebers - justin - singer - miami | 40 | 319_bieber_biebers_justin_singer |
| 320 | pageant - miss - universe - contestant - beauty | 40 | 320_pageant_miss_universe_contestant |
| 321 | cyclone - kashmir - flooding - srinagar - andhra | 40 | 321_cyclone_kashmir_flooding_srinagar |
| 322 | bus - crash - highway - accident - driver | 39 | 322_bus_crash_highway_accident |
| 323 | breastfeeding - breastfeed - feeding - baby - kaleena | 39 | 323_breastfeeding_breastfeed_feeding_baby |
| 324 | repeal - gay - military - lesbian - openly | 39 | 324_repeal_gay_military_lesbian |
| 325 | fgm - mutilation - genital - dharmasena - female | 39 | 325_fgm_mutilation_genital_dharmasena |
| 326 | card - breach - credit - debit - data | 39 | 326_card_breach_credit_debit |
| 327 | ring - engagement - tovin - wedding - diamond | 39 | 327_ring_engagement_tovin_wedding |
| 328 | marathon - boston - runner - race - runners | 38 | 328_marathon_boston_runner_race |
| 329 | russian - airspace - raf - bomber - aircraft | 38 | 329_russian_airspace_raf_bomber |
| 330 | ghost - haunted - ghostly - paranormal - spooky | 38 | 330_ghost_haunted_ghostly_paranormal |
| 331 | tsunami - japan - sendai - earthquake - fukushima | 38 | 331_tsunami_japan_sendai_earthquake |
| 332 | ecclestone - gribkowsky - bernie - ecclestones - formula | 38 | 332_ecclestone_gribkowsky_bernie_ecclestones |
| 333 | turbine - wind - farm - energy - onshore | 38 | 333_turbine_wind_farm_energy |
| 334 | hazing - famu - band - champion - marching | 38 | 334_hazing_famu_band_champion |
| 335 | fertilizer - explosion - plant - ammonium - nitrate | 38 | 335_fertilizer_explosion_plant_ammonium |
| 336 | selfie - selfies - cornellier - photo - dictionaries | 38 | 336_selfie_selfies_cornellier_photo |
| 337 | manson - tate - atkins - parole - mansons | 38 | 337_manson_tate_atkins_parole |
| 338 | mushroom - ash - dieback - tree - fungus | 38 | 338_mushroom_ash_dieback_tree |
| 339 | petrol - litre - price - fuel - diesel | 37 | 339_petrol_litre_price_fuel |
| 340 | mortgage - rate - carney - inflation - bank | 37 | 340_mortgage_rate_carney_inflation |
| 341 | simpson - oj - goldman - simpsons - nicole | 37 | 341_simpson_oj_goldman_simpsons |
| 342 | bali - indonesia - jakarta - indonesian - jemaah | 37 | 342_bali_indonesia_jakarta_indonesian |
| 343 | capaldi - doctor - clara - episode - moffat | 36 | 343_capaldi_doctor_clara_episode |
| 344 | breivik - utoya - oslo - norway - breiviks | 36 | 344_breivik_utoya_oslo_norway |
| 345 | koppenhaver - mack - wwe - wrestling - wrestler | 36 | 345_koppenhaver_mack_wwe_wrestling |
| 346 | gascoigne - gazza - gascoignes - rehab - poole | 36 | 346_gascoigne_gazza_gascoignes_rehab |
| 347 | film - fu - kung - lee - hong | 36 | 347_film_fu_kung_lee |
| 348 | strasbourg - rights - echr - human - grayling | 36 | 348_strasbourg_rights_echr_human |
| 349 | pakistan - flood - flooding - sindh - relief | 36 | 349_pakistan_flood_flooding_sindh |
| 350 | meat - beef - horsemeat - horse - food | 36 | 350_meat_beef_horsemeat_horse |
| 351 | poker - ivey - casino - crockfords - card | 36 | 351_poker_ivey_casino_crockfords |
| 352 | mh17 - ukraine - ukrainian - missile - buk | 36 | 352_mh17_ukraine_ukrainian_missile |
| 353 | sanitation - trafigura - water - toilet - ewaste | 36 | 353_sanitation_trafigura_water_toilet |
| 354 | lightning - queensland - meteorology - storm - cyclone | 36 | 354_lightning_queensland_meteorology_storm |
| 355 | mumbai - kasab - india - taj - indian | 35 | 355_mumbai_kasab_india_taj |
| 356 | santos - samudio - bruno - samudios - souza | 35 | 356_santos_samudio_bruno_samudios |
| 357 | airline - faa - electronic - device - flight | 35 | 357_airline_faa_electronic_device |
| 358 | bahrain - bahrains - bahraini - rajab - saudi | 35 | 358_bahrain_bahrains_bahraini_rajab |
| 359 | arizona - immigration - arizonas - law - brewer | 35 | 359_arizona_immigration_arizonas_law |
| 360 | fox - cat - hog - wolf - animal | 35 | 360_fox_cat_hog_wolf |
| 361 | agencia - brasil - rio - janeiro - teresopolis | 35 | 361_agencia_brasil_rio_janeiro |
| 362 | kasem - kerri - kasems - jean - casey | 35 | 362_kasem_kerri_kasems_jean |
| 363 | osullivan - snooker - trump - ronnie - frame | 35 | 363_osullivan_snooker_trump_ronnie |
| 364 | frein - trooper - bivens - pennsylvania - dickson | 35 | 364_frein_trooper_bivens_pennsylvania |
| 365 | harry - prince - apache - afghanistan - helicopter | 35 | 365_harry_prince_apache_afghanistan |
| 366 | filin - bolshoi - ballet - dmitrichenko - dancer | 35 | 366_filin_bolshoi_ballet_dmitrichenko |
| 367 | ricin - dutschke - curtis - letter - bloomberg | 34 | 367_ricin_dutschke_curtis_letter |
| 368 | chechen - caucasus - dagestan - umarov - chechnya | 34 | 368_chechen_caucasus_dagestan_umarov |
| 369 | canadian - ottawa - vickers - zehafbibeau - parliament | 34 | 369_canadian_ottawa_vickers_zehafbibeau |
| 370 | gibraltar - spanish - spain - gibraltars - picardo | 34 | 370_gibraltar_spanish_spain_gibraltars |
| 371 | marriage - gay - samesex - tory - partnership | 34 | 371_marriage_gay_samesex_tory |
| 372 | china - chinas - economy - growth - chinese | 34 | 372_china_chinas_economy_growth |
| 373 | bikers - lien - mieses - biker - suv | 34 | 373_bikers_lien_mieses_biker |
| 374 | enterovirus - evd68 - d68 - virus - respiratory | 34 | 374_enterovirus_evd68_d68_virus |
| 375 | sikh - sikhs - singh - temple - kaleka | 34 | 375_sikh_sikhs_singh_temple |
| 376 | education - rhee - teacher - teachers - schools | 34 | 376_education_rhee_teacher_teachers |
| 377 | romanian - romanians - romania - roma - arch | 34 | 377_romanian_romanians_romania_roma |
| 378 | cannabis - marijuana - synthetic - k2 - drug | 34 | 378_cannabis_marijuana_synthetic_k2 |
| 379 | witheridge - thai - koh - tao - zaw | 33 | 379_witheridge_thai_koh_tao |
| 380 | chickfila - gay - cathy - therapy - rekers | 33 | 380_chickfila_gay_cathy_therapy |
| 381 | compounding - meningitis - fungal - necc - steroid | 33 | 381_compounding_meningitis_fungal_necc |
| 382 | marathon - runner - badwater - scotlandwilliams - baluchi | 33 | 382_marathon_runner_badwater_scotlandwilliams |
| 383 | rohingya - myanmar - rakhine - buddhists - buddhist | 33 | 383_rohingya_myanmar_rakhine_buddhists |
| 384 | boo - mcdaniel - mama - alana - honey | 32 | 384_boo_mcdaniel_mama_alana |
| 385 | lusty - taubman - lawsuit - rogers - hotton | 32 | 385_lusty_taubman_lawsuit_rogers |
| 386 | tax - hmrc - starbucks - avoidance - hodge | 32 | 386_tax_hmrc_starbucks_avoidance |
| 387 | sheen - sheens - charlie - brooke - mueller | 32 | 387_sheen_sheens_charlie_brooke |
| 388 | chaney - 4chan - nude - hacker - celebrity | 31 | 388_chaney_4chan_nude_hacker |
| 389 | blair - chilcot - inquiry - iraq - sir | 31 | 389_blair_chilcot_inquiry_iraq |
| 390 | diamond - heist - cannes - jewel - jewelry | 31 | 390_diamond_heist_cannes_jewel |
| 391 | whaling - whale - shepherd - japanese - maru | 31 | 391_whaling_whale_shepherd_japanese |
| 392 | miss - pageant - beauty - contestant - universe | 31 | 392_miss_pageant_beauty_contestant |
| 393 | scientology - miscavige - scientologists - church - org | 31 | 393_scientology_miscavige_scientologists_church |
| 394 | botox - skin - filler - lip - cosmetic | 31 | 394_botox_skin_filler_lip |
| 395 | payday - wonga - loan - lender - fca | 31 | 395_payday_wonga_loan_lender |
| 396 | apple - ebooks - kindle - ebook - publisher | 31 | 396_apple_ebooks_kindle_ebook |
| 397 | ukba - immigration - border - asylum - backlog | 31 | 397_ukba_immigration_border_asylum |
| 398 | cyber - stuxnet - computer - hacker - kaspersky | 31 | 398_cyber_stuxnet_computer_hacker |
| 399 | heroes - ireport - journalism - cnn - allstar | 31 | 399_heroes_ireport_journalism_cnn |
| 400 | priest - lynn - archdiocese - philadelphia - monsignor | 30 | 400_priest_lynn_archdiocese_philadelphia |
| 401 | travolta - travoltas - jett - okorocha - kawasaki | 30 | 401_travolta_travoltas_jett_okorocha |
| 402 | chlamydia - stis - gonorrhea - sti - gonorrhoea | 30 | 402_chlamydia_stis_gonorrhea_sti |
| 403 | bake - ruby - berry - tandoh - hollywood | 30 | 403_bake_ruby_berry_tandoh |
| 404 | mers - sars - virus - coronavirus - respiratory | 30 | 404_mers_sars_virus_coronavirus |
| 405 | kashmir - srinagar - indian - india - pakistan | 30 | 405_kashmir_srinagar_indian_india |
| 406 | princess - charlene - prince - madeleine - royal | 30 | 406_princess_charlene_prince_madeleine |
| 407 | bangkok - flood - thailand - flooding - thai | 30 | 407_bangkok_flood_thailand_flooding |
| 408 | dee - benefits - channel - street - turner | 30 | 408_dee_benefits_channel_street |
| 409 | litvinenko - berezovsky - litvinenkos - russian - kgb | 30 | 409_litvinenko_berezovsky_litvinenkos_russian |
| 410 | bobbi - kristina - gordon - whitney - houston | 30 | 410_bobbi_kristina_gordon_whitney |
| 411 | canyon - park - yosemite - national - rim | 30 | 411_canyon_park_yosemite_national |
| 412 | philpott - mairead - mick - mosley - philpotts | 30 | 412_philpott_mairead_mick_mosley |
| 413 | gun - 3d - printer - liberator - bullet | 30 | 413_gun_3d_printer_liberator |
| 414 | lundberg - oil - gas - gallon - price | 30 | 414_lundberg_oil_gas_gallon |
| 415 | eye - lens - vision - cornea - glaucoma | 29 | 415_eye_lens_vision_cornea |
| 416 | divorce - youngs - scot - reno - young | 29 | 416_divorce_youngs_scot_reno |
| 417 | robertson - duck - dynasty - ae - phil | 29 | 417_robertson_duck_dynasty_ae |
| 418 | samesex - marriage - legalize - gay - bill | 29 | 418_samesex_marriage_legalize_gay |
| 419 | fingerprint - password - sensor - unlock - apple | 29 | 419_fingerprint_password_sensor_unlock |
| 420 | cruise - norovirus - ship - passenger - outbreak | 29 | 420_cruise_norovirus_ship_passenger |
| 421 | sao - paulo - brazil - protest - cup | 29 | 421_sao_paulo_brazil_protest |
| 422 | resort - holiday - hotel - ill - hygiene | 29 | 422_resort_holiday_hotel_ill |
| 423 | gm - crop - genetically - modified - farming | 29 | 423_gm_crop_genetically_modified |
| 424 | cocaine - drug - smuggler - coast - guard | 29 | 424_cocaine_drug_smuggler_coast |
| 425 | blagojevich - illinois - burris - governor - senate | 29 | 425_blagojevich_illinois_burris_governor |
| 426 | teeth - dentist - dental - denture - tooth | 29 | 426_teeth_dentist_dental_denture |
| 427 | clarkson - gear - jeremy - bbc - presenter | 29 | 427_clarkson_gear_jeremy_bbc |
| 428 | harbor - pearl - hiroshima - nagasaki - atomic | 29 | 428_harbor_pearl_hiroshima_nagasaki |
| 429 | waste - recycling - ewaste - recycled - tyre | 29 | 429_waste_recycling_ewaste_recycled |
| 430 | teacher - teachers - mcfarland - union - church | 28 | 430_teacher_teachers_mcfarland_union |
| 431 | sinkhole - hole - sinkholes - swallowed - seffner | 28 | 431_sinkhole_hole_sinkholes_swallowed |
| 432 | filner - filners - mayor - diego - harassment | 28 | 432_filner_filners_mayor_diego |
| 433 | cambodia - temple - cambodian - thai - thailand | 28 | 433_cambodia_temple_cambodian_thai |
| 434 | fragrance - perfume - scent - bottle - eau | 28 | 434_fragrance_perfume_scent_bottle |
| 435 | chinese - hacker - hacking - cyber - china | 28 | 435_chinese_hacker_hacking_cyber |
| 436 | dubai - dalelv - mcredmond - blake - acors | 28 | 436_dubai_dalelv_mcredmond_blake |
| 437 | traveller - caravan - travellers - dale - eviction | 28 | 437_traveller_caravan_travellers_dale |
| 438 | khobragade - devyani - housekeeper - indian - immunity | 28 | 438_khobragade_devyani_housekeeper_indian |
| 439 | giordano - gardner - aruba - aruban - robyn | 28 | 439_giordano_gardner_aruba_aruban |
| 440 | fire - bushfires - bushfire - blaze - adelaide | 28 | 440_fire_bushfires_bushfire_blaze |
| 441 | driving - pennant - footballer - speeding - nash | 28 | 441_driving_pennant_footballer_speeding |
| 442 | downton - abbey - lady - grantham - maggie | 28 | 442_downton_abbey_lady_grantham |
| 443 | lodge - sweat - ray - participant - selfhelp | 27 | 443_lodge_sweat_ray_participant |
| 444 | hamza - almasri - hamzas - quin - abu | 27 | 444_hamza_almasri_hamzas_quin |
| 445 | magnotta - montreal - lafreniere - lin - luka | 27 | 445_magnotta_montreal_lafreniere_lin |
| 446 | sniper - kyle - clint - moore - eastwood | 27 | 446_sniper_kyle_clint_moore |
| 447 | askfm - hannah - bullying - troll - cyberbullying | 27 | 447_askfm_hannah_bullying_troll |
| 448 | google - privacy - googles - gmail - user | 27 | 448_google_privacy_googles_gmail |
| 449 | icebreaker - ice - shokalskiy - akademik - ship | 27 | 449_icebreaker_ice_shokalskiy_akademik |
| 450 | michaella - melissa - mccollum - reid - lima | 27 | 450_michaella_melissa_mccollum_reid |
| 451 | gandolfini - sopranos - gandolfinis - soprano - kobold | 26 | 451_gandolfini_sopranos_gandolfinis_soprano |
| 452 | oldest - imich - okawa - kimura - stoehr | 26 | 452_oldest_imich_okawa_kimura |
| 453 | dress - kate - duchess - pearl - worn | 26 | 453_dress_kate_duchess_pearl |
| 454 | reef - coral - reefs - marine - seaview | 26 | 454_reef_coral_reefs_marine |
| 455 | veronica - capobiancos - adoption - dusten - capobianco | 26 | 455_veronica_capobiancos_adoption_dusten |
| 456 | turing - bletchley - enigma - turings - code | 26 | 456_turing_bletchley_enigma_turings |
| 457 | facebook - social - study - happiness - hedonometer | 26 | 457_facebook_social_study_happiness |
| 458 | solo - solos - stevens - jerramy - soccer | 26 | 458_solo_solos_stevens_jerramy |
| 459 | college - tuition - loan - student - education | 26 | 459_college_tuition_loan_student |
| 460 | inmate - prison - sentencing - mandatory - crack | 26 | 460_inmate_prison_sentencing_mandatory |
| 461 | funeral - alicante - carousel - belt - airport | 26 | 461_funeral_alicante_carousel_belt |
| 462 | constable - policing - buckland - pension - commissioner | 26 | 462_constable_policing_buckland_pension |
| 463 | college - enin - admission - student - sat | 26 | 463_college_enin_admission_student |
| 464 | abortion - termination - bpas - pregnancy - doogan | 26 | 464_abortion_termination_bpas_pregnancy |
| 465 | alzheimers - brain - dementia - disease - cognitive | 26 | 465_alzheimers_brain_dementia_disease |
| 466 | extradition - mckinnon - dunham - mckinnons - extradited | 26 | 466_extradition_mckinnon_dunham_mckinnons |
| 467 | crocodile - reptile - crocodiles - croc - saltwater | 26 | 467_crocodile_reptile_crocodiles_croc |
| 468 | circumcision - circumcised - foreskin - herpes - uncircumcised | 25 | 468_circumcision_circumcised_foreskin_herpes |
| 469 | g4s - buckles - olympic - olympics - games | 25 | 469_g4s_buckles_olympic_olympics |
| 470 | charger - iphone - apple - phone - battery | 25 | 470_charger_iphone_apple_phone |
| 471 | missile - nuclear - air - force - minuteman | 25 | 471_missile_nuclear_air_force |
| 472 | food - hunger - rice - hungry - undernourished | 25 | 472_food_hunger_rice_hungry |
| 473 | lowndes - kendrick - johnsons - mat - gym | 25 | 473_lowndes_kendrick_johnsons_mat |
| 474 | ashya - proton - ashyas - prague - therapy | 25 | 474_ashya_proton_ashyas_prague |
| 475 | disney - disneyland - walt - park - theme | 25 | 475_disney_disneyland_walt_park |
| 476 | bleach - kaur - harding - dickeys - drank | 25 | 476_bleach_kaur_harding_dickeys |
| 477 | gosnell - gosnells - clinic - abortion - philadelphia | 25 | 477_gosnell_gosnells_clinic_abortion |
| 478 | charlottesville - matthew - hannah - virginia - harrington | 25 | 478_charlottesville_matthew_hannah_virginia |
| 479 | ugandan - uganda - homosexuality - gay - homosexual | 25 | 479_ugandan_uganda_homosexuality_gay |
| 480 | deen - deens - paula - bubba - paculis | 25 | 480_deen_deens_paula_bubba |
| 481 | penguin - penguins - albatross - chick - rspb | 25 | 481_penguin_penguins_albatross_chick |
| 482 | statue - liberty - ellis - island - phuket | 25 | 482_statue_liberty_ellis_island |
| 483 | polio - vaccination - pakistan - vaccine - antipolio | 25 | 483_polio_vaccination_pakistan_vaccine |
| 484 | carnage - magaluf - crawl - reveller - roki | 25 | 484_carnage_magaluf_crawl_reveller |
| 485 | dad - mom - noa - cry - parenting | 25 | 485_dad_mom_noa_cry |
| 486 | nyad - swim - nyads - mccardel - jellyfish | 25 | 486_nyad_swim_nyads_mccardel |
| 487 | redskins - native - snyder - name - mascot | 25 | 487_redskins_native_snyder_name |
| 488 | mcafee - belize - faull - mcafees - guatemala | 25 | 488_mcafee_belize_faull_mcafees |
| 489 | mousa - inquiry - lawyers - shiner - iraqis | 25 | 489_mousa_inquiry_lawyers_shiner |
| 490 | orleans - katrina - traylor - levee - hurricane | 24 | 490_orleans_katrina_traylor_levee |
| 491 | alligator - gator - reptile - bobcat - alligators | 24 | 491_alligator_gator_reptile_bobcat |
| 492 | eurostar - train - tunnel - rail - confino | 24 | 492_eurostar_train_tunnel_rail |
| 493 | passport - backlog - application - pugh - office | 24 | 493_passport_backlog_application_pugh |
| 494 | stripper - barbash - gristina - lusty - prostitution | 24 | 494_stripper_barbash_gristina_lusty |
| 495 | caffeine - coffee - drink - energy - drinks | 24 | 495_caffeine_coffee_drink_energy |
| 496 | qaeda - al - yemen - alawlaki - embassy | 24 | 496_qaeda_al_yemen_alawlaki |
| 497 | saudi - olympic - arabia - athlete - ioc | 24 | 497_saudi_olympic_arabia_athlete |
| 498 | greenpeace - arctic - russian - sunrise - activist | 24 | 498_greenpeace_arctic_russian_sunrise |
| 499 | pryce - huhne - briscoe - vicky - speeding | 24 | 499_pryce_huhne_briscoe_vicky |
| 500 | adhd - disorder - hyperactivity - ritalin - stimulant | 24 | 500_adhd_disorder_hyperactivity_ritalin |
| 501 | makeup - beauty - lipstick - lip - skin | 24 | 501_makeup_beauty_lipstick_lip |
| 502 | jesus - manuscript - papyrus - gospel - bible | 24 | 502_jesus_manuscript_papyrus_gospel |
| 503 | student - students - university - cambridge - drinking | 24 | 503_student_students_university_cambridge |
| 504 | twitter - tweet - kutcher - twitters - hashtags | 24 | 504_twitter_tweet_kutcher_twitters |
| 505 | triathlon - workout - crossfit - bike - brynn | 23 | 505_triathlon_workout_crossfit_bike |
| 506 | mcdonnell - mcdonnells - maureen - williams - morrissey | 23 | 506_mcdonnell_mcdonnells_maureen_williams |
| 507 | sport - pe - olympic - olympics - school | 23 | 507_sport_pe_olympic_olympics |
| 508 | winkle - fieri - walmart - degraff - stolen | 23 | 508_winkle_fieri_walmart_degraff |
| 509 | morgan - roper - walmart - limo - morgans | 23 | 509_morgan_roper_walmart_limo |
| 510 | bag - plastic - bags - 5p - singleuse | 23 | 510_bag_plastic_bags_5p |
| 511 | howard - pc - bowman - discrimination - tribunal | 23 | 511_howard_pc_bowman_discrimination |
| 512 | tostee - tostees - gable - warriena - wright | 23 | 512_tostee_tostees_gable_warriena |
| 513 | f35 - f22 - fighter - air - jet | 23 | 513_f35_f22_fighter_air |
| 514 | greste - fahmy - jazeera - mohamed - baher | 23 | 514_greste_fahmy_jazeera_mohamed |
| 515 | jutting - kong - juttings - hong - rurik | 23 | 515_jutting_kong_juttings_hong |
| 516 | dna - genome - synthetic - yeast - genetic | 23 | 516_dna_genome_synthetic_yeast |
| 517 | gammy - surrogacy - surrogate - thai - gammys | 23 | 517_gammy_surrogacy_surrogate_thai |
| 518 | collins - grant - 7th - faye - heaven | 23 | 518_collins_grant_7th_faye |
| 519 | pension - annuity - pensions - saver - retirement | 23 | 519_pension_annuity_pensions_saver |
| 520 | russell - housewives - russells - barrino - taylor | 23 | 520_russell_housewives_russells_barrino |
| 521 | council - a4e - councillor - parryjones - allowance | 23 | 521_council_a4e_councillor_parryjones |
| 522 | maternal - childbirth - reproductive - birth - mortality | 23 | 522_maternal_childbirth_reproductive_birth |
| 523 | riga - moscow - latvias - fire - russia | 22 | 523_riga_moscow_latvias_fire |
| 524 | hollande - tax - french - arnault - france | 22 | 524_hollande_tax_french_arnault |
| 525 | mps - ipsa - expense - mp - salary | 22 | 525_mps_ipsa_expense_mp |
| 526 | jackson - jacksons - michael - goodall - balser | 22 | 526_jackson_jacksons_michael_goodall |
| 527 | vodianova - moscow - prokudingorsky - tsar - nv | 22 | 527_vodianova_moscow_prokudingorsky_tsar |
| 528 | horse - isaacson - ruggeasey - bronwen - winterburn | 22 | 528_horse_isaacson_ruggeasey_bronwen |
| 529 | fayed - diana - dianas - dodi - burrell | 22 | 529_fayed_diana_dianas_dodi |
| 530 | betting - gambling - fobts - shop - bookmaker | 22 | 530_betting_gambling_fobts_shop |
| 531 | lawrence - norris - dobson - stephen - acourt | 22 | 531_lawrence_norris_dobson_stephen |
| 532 | kobane - kurdish - kobani - isis - turkey | 22 | 532_kobane_kurdish_kobani_isis |
| 533 | cliff - sir - bbc - yorkshire - raid | 22 | 533_cliff_sir_bbc_yorkshire |
| 534 | atf - furious - holder - fast - osorioarellanes | 22 | 534_atf_furious_holder_fast |
| 535 | hpv - vaccine - cervical - cancer - gardasil | 22 | 535_hpv_vaccine_cervical_cancer |
| 536 | bitcoin - currency - bitcoins - digital - virtual | 21 | 536_bitcoin_currency_bitcoins_digital |
| 537 | westboro - baptist - church - phelps - picket | 21 | 537_westboro_baptist_church_phelps |
| 538 | flu - h7n9 - virus - poultry - bird | 21 | 538_flu_h7n9_virus_poultry |
| 539 | bowl - puppy - super - ad - godaddy | 21 | 539_bowl_puppy_super_ad |
| 540 | pele - peles - kidney - einstein - edinho | 21 | 540_pele_peles_kidney_einstein |
| 541 | eurovision - contest - conchita - song - azerbaijan | 21 | 541_eurovision_contest_conchita_song |
| 542 | bucket - als - challenge - ice - frates | 21 | 542_bucket_als_challenge_ice |
| 543 | guantanamo - detainee - prisoner - gitmo - bay | 21 | 543_guantanamo_detainee_prisoner_gitmo |
| 544 | autism - autistic - ocd - disorder - nac | 21 | 544_autism_autistic_ocd_disorder |
| 545 | gaza - blockade - israeli - israel - ship | 21 | 545_gaza_blockade_israeli_israel |
| 546 | dotcom - megaupload - dotcoms - copyright - piracy | 21 | 546_dotcom_megaupload_dotcoms_copyright |
| 547 | milk - melamine - dairy - arsenic - rice | 21 | 547_milk_melamine_dairy_arsenic |
| 548 | breast - mastectomy - cancer - angelina - gene | 21 | 548_breast_mastectomy_cancer_angelina |
| 549 | netflix - streaming - tv - netflixs - wuaki | 21 | 549_netflix_streaming_tv_netflixs |
| 550 | hobbit - tolkien - rings - trilogy - tolkiens | 21 | 550_hobbit_tolkien_rings_trilogy |
| 551 | heathrow - runway - airport - estuary - boris | 21 | 551_heathrow_runway_airport_estuary |
| 552 | dow - stock - sp - nasdaq - index | 21 | 552_dow_stock_sp_nasdaq |
| 553 | kassig - isis - doureihi - alberici - mansouri | 21 | 553_kassig_isis_doureihi_alberici |
| 554 | ferry - capsized - boat - bangladesh - sank | 21 | 554_ferry_capsized_boat_bangladesh |
| 555 | lunch - school - nutrition - food - healthier | 21 | 555_lunch_school_nutrition_food |
| 556 | mitochondrial - embryo - mitochondrion - dna - egg | 21 | 556_mitochondrial_embryo_mitochondrion_dna |
| 557 | rivers - endoscopy - korovin - yorkville - joan | 21 | 557_rivers_endoscopy_korovin_yorkville |
| 558 | cull - badger - tb - mcintosh - culling | 21 | 558_cull_badger_tb_mcintosh |
| 559 | tribe - indigenous - guarani - indians - totobiegosode | 20 | 559_tribe_indigenous_guarani_indians |
| 560 | policing - officer - crime - constable - pcsos | 20 | 560_policing_officer_crime_constable |
| 561 | student - strickland - barron - barrons - school | 20 | 561_student_strickland_barron_barrons |
| 562 | flag - union - saltire - flags - nepalese | 20 | 562_flag_union_saltire_flags |
| 563 | hagel - secretary - pentagon - carter - defense | 20 | 563_hagel_secretary_pentagon_carter |
| 564 | pole - trek - harry - antarctica - wounded | 20 | 564_pole_trek_harry_antarctica |
| 565 | loshagin - rakossi - gorulenko - kurochkin - abdullaev | 20 | 565_loshagin_rakossi_gorulenko_kurochkin |
| 566 | roma - ruseva - maria - greece - bulgarian | 20 | 566_roma_ruseva_maria_greece |
| 567 | disney - elsa - frozen - menzel - cinderella | 20 | 567_disney_elsa_frozen_menzel |
| 568 | han - subway - train - platform - mickens | 20 | 568_han_subway_train_platform |
| 569 | strictly - dance - dancing - alesha - bussell | 20 | 569_strictly_dance_dancing_alesha |
| 570 | nobel - prize - peace - oslo - award | 20 | 570_nobel_prize_peace_oslo |
| 571 | handwriting - font - typewriter - suvir - cursive | 20 | 571_handwriting_font_typewriter_suvir |
| 572 | rmt - strike - tube - aslef - tfl | 20 | 572_rmt_strike_tube_aslef |
| 573 | vitamin - supplement - calcium - irx3 - multivitamin | 20 | 573_vitamin_supplement_calcium_irx3 |
| 574 | kilpatrick - nagin - beatty - detroit - kilpatricks | 20 | 574_kilpatrick_nagin_beatty_detroit |
| 575 | boulder - flooding - rain - colorado - county | 20 | 575_boulder_flooding_rain_colorado |
| 576 | skin - cream - collagen - venom - wrinkle | 20 | 576_skin_cream_collagen_venom |
| 577 | catalonia - catalan - independence - spains - spain | 20 | 577_catalonia_catalan_independence_spains |
| 578 | rail - highspeed - transportation - project - amtrak | 20 | 578_rail_highspeed_transportation_project |
| 579 | arpaio - arpaios - sheriff - maricopa - latinos | 20 | 579_arpaio_arpaios_sheriff_maricopa |
| 580 | polanski - polanskis - geimer - roman - 13yearold | 19 | 580_polanski_polanskis_geimer_roman |
| 581 | algerian - hostage - algeria - mali - belmokhtar | 19 | 581_algerian_hostage_algeria_mali |
| 582 | prosthetic - hand - ilimb - bionic - limb | 19 | 582_prosthetic_hand_ilimb_bionic |
| 583 | cho - nut - korean - macadamia - hyunah | 19 | 583_cho_nut_korean_macadamia |
| 584 | ascot - racegoers - hat - racegoer - dress | 19 | 584_ascot_racegoers_hat_racegoer |
| 585 | bin - laden - ladens - oneill - osama | 19 | 585_bin_laden_ladens_oneill |
| 586 | wisconsin - walker - bargaining - walkers - collective | 19 | 586_wisconsin_walker_bargaining_walkers |
| 587 | corruption - corrupt - index - transparency - ranked | 19 | 587_corruption_corrupt_index_transparency |
| 588 | bridge - bridges - span - skagit - collapse | 19 | 588_bridge_bridges_span_skagit |
| 589 | thailand - armstrongbland - janson - koh - hart | 19 | 589_thailand_armstrongbland_janson_koh |
| 590 | bieber - justin - selena - biebers - gomez | 19 | 590_bieber_justin_selena_biebers |
| 591 | jahi - jahis - mcmath - winkfield - oakland | 19 | 591_jahi_jahis_mcmath_winkfield |
| 592 | carnival - notting - hill - festival - reveller | 19 | 592_carnival_notting_hill_festival |
| 593 | fat - weight - settles - plussize - hopkins | 19 | 593_fat_weight_settles_plussize |
| 594 | train - rail - railway - engineering - trains | 19 | 594_train_rail_railway_engineering |
| 595 | benefits - benefit - minkin - stephanie - sisarova | 19 | 595_benefits_benefit_minkin_stephanie |
| 596 | rambold - baugh - cherice - hanlon - rambolds | 19 | 596_rambold_baugh_cherice_hanlon |
| 597 | neknominate - alcohol - drinking - vodka - craze | 19 | 597_neknominate_alcohol_drinking_vodka |
| 598 | dyson - vacuum - cleaner - gtech - dysons | 19 | 598_dyson_vacuum_cleaner_gtech |
| 599 | ulbricht - silk - bitcoins - bitcoin - ulbrichts | 19 | 599_ulbricht_silk_bitcoins_bitcoin |
| 600 | lights - milky - aurora - sky - northern | 19 | 600_lights_milky_aurora_sky |
| 601 | budget - pentagon - defense - panetta - sequestration | 18 | 601_budget_pentagon_defense_panetta |
| 602 | ivins - anthrax - lab - smallpox - cdc | 18 | 602_ivins_anthrax_lab_smallpox |
| 603 | marikana - lonmin - mine - miner - platinum | 18 | 603_marikana_lonmin_mine_miner |
| 604 | belcher - perkins - chiefs - martel - arrowhead | 18 | 604_belcher_perkins_chiefs_martel |
| 605 | cholera - mintz - juba - flooding - flood | 18 | 605_cholera_mintz_juba_flooding |
| 606 | navy - hms - commander - portland - cdr | 18 | 606_navy_hms_commander_portland |
| 607 | chestnut - kobayashi - nathans - hot - contest | 18 | 607_chestnut_kobayashi_nathans_hot |
| 608 | nepal - maoists - maoist - nepals - katawal | 18 | 608_nepal_maoists_maoist_nepals |
| 609 | watts - reese - witherspoon - toth - lucas | 18 | 609_watts_reese_witherspoon_toth |
| 610 | manhattan - city - neon - downtown - york | 18 | 610_manhattan_city_neon_downtown |
| 611 | amoeba - kali - naegleria - kalis - hardig | 18 | 611_amoeba_kali_naegleria_kalis |
| 612 | mayan - apocalypse - maya - calendar - mayans | 18 | 612_mayan_apocalypse_maya_calendar |
| 613 | nuclear - treaty - weapon - russia - missile | 18 | 613_nuclear_treaty_weapon_russia |
| 614 | hinckley - hinckleys - brady - reagan - williamsburg | 18 | 614_hinckley_hinckleys_brady_reagan |
| 615 | toy - moshi - playdoh - shopkins - toys | 18 | 615_toy_moshi_playdoh_shopkins |
| 616 | buenos - aires - argentina - aymara - salva | 18 | 616_buenos_aires_argentina_aymara |
| 617 | chinese - wu - china - chinas - gay | 18 | 617_chinese_wu_china_chinas |
| 618 | mack - schaefer - macks - wiesemack - bali | 18 | 618_mack_schaefer_macks_wiesemack |
| 619 | accent - dialect - cromarty - language - syndrome | 18 | 619_accent_dialect_cromarty_language |
| 620 | pupil - barnwell - teaching - camurat - school | 18 | 620_pupil_barnwell_teaching_camurat |
| 621 | wolfe - wren - gosk - skyler - gender | 18 | 621_wolfe_wren_gosk_skyler |
| 622 | plate - database - camera - cctv - license | 18 | 622_plate_database_camera_cctv |
| 623 | carta - magna - 1215 - library - copy | 18 | 623_carta_magna_1215_library |
| 624 | marilyn - monroe - jasgur - photograph - monroes | 18 | 624_marilyn_monroe_jasgur_photograph |
| 625 | apparel - advert - slogan - tshirt - amazon | 18 | 625_apparel_advert_slogan_tshirt |
| 626 | viagra - oestrogen - menopause - erectile - erection | 18 | 626_viagra_oestrogen_menopause_erectile |
| 627 | meriam - wani - sudanese - ibrahim - sudan | 18 | 627_meriam_wani_sudanese_ibrahim |
| 628 | spaceshiptwo - galactic - virgin - alsbury - mojave | 18 | 628_spaceshiptwo_galactic_virgin_alsbury |
| 629 | rubbish - bin - waste - collection - council | 18 | 629_rubbish_bin_waste_collection |
| 630 | pardon - barbour - ozment - pardoned - mississippi | 18 | 630_pardon_barbour_ozment_pardoned |
| 631 | pilot - navy - jet - besal - coast | 18 | 631_pilot_navy_jet_besal |
| 632 | bollywood - slumdog - bachchan - film - cinema | 17 | 632_bollywood_slumdog_bachchan_film |
| 633 | holiday - termtime - school - fine - fined | 17 | 633_holiday_termtime_school_fine |
| 634 | amoudi - paton - alwaleed - saudi - prince | 17 | 634_amoudi_paton_alwaleed_saudi |
| 635 | evolution - creationism - scientific - intelligent - darwins | 17 | 635_evolution_creationism_scientific_intelligent |
| 636 | hepatitis - kwiatkowski - dental - patient - dentist | 17 | 636_hepatitis_kwiatkowski_dental_patient |
| 637 | marrocco - transplant - arm - lautzenheiser - limb | 17 | 637_marrocco_transplant_arm_lautzenheiser |
| 638 | clown - clowns - northampton - beep - coulrophobia | 17 | 638_clown_clowns_northampton_beep |
| 639 | flowers - coop - methodist - cooperative - bank | 17 | 639_flowers_coop_methodist_cooperative |
| 640 | artificial - ai - deepmind - intelligence - machine | 17 | 640_artificial_ai_deepmind_intelligence |
| 641 | venables - bulger - fergus - bootle - thompson | 17 | 641_venables_bulger_fergus_bootle |
| 642 | badenclay - allison - allisons - badenclays - gerard | 17 | 642_badenclay_allison_allisons_badenclays |
| 643 | grayling - prisoner - offender - prison - justice | 17 | 643_grayling_prisoner_offender_prison |
| 644 | fritzl - elisabeth - cellar - amstetten - josef | 17 | 644_fritzl_elisabeth_cellar_amstetten |
| 645 | bacteria - germ - antibacterial - towel - wash | 17 | 645_bacteria_germ_antibacterial_towel |
| 646 | statin - statins - cholesterol - yeast - risk | 17 | 646_statin_statins_cholesterol_yeast |
| 647 | drone - iran - iranian - rq170 - aircraft | 17 | 647_drone_iran_iranian_rq170 |
| 648 | watkins - mjadzelics - lostprophets - pontypridd - ian | 17 | 648_watkins_mjadzelics_lostprophets_pontypridd |
| 649 | gates - crowley - cambridge - stupidly - harvard | 17 | 649_gates_crowley_cambridge_stupidly |
| 650 | roma - valls - camp - evry - france | 16 | 650_roma_valls_camp_evry |
| 651 | barzee - mitchell - smart - mitchells - smarts | 16 | 651_barzee_mitchell_smart_mitchells |
| 652 | hindley - brady - saddleworth - keith - keiths | 16 | 652_hindley_brady_saddleworth_keith |
| 653 | implant - ear - cochlear - hearing - deaf | 16 | 653_implant_ear_cochlear_hearing |
| 654 | text - internet - texting - pew - survey | 16 | 654_text_internet_texting_pew |
| 655 | jagger - lwren - stones - mick - scotts | 16 | 655_jagger_lwren_stones_mick |
| 656 | beatrix - maxima - willemalexander - queen - throne | 16 | 656_beatrix_maxima_willemalexander_queen |
| 657 | maglev - train - highspeed - rail - railway | 16 | 657_maglev_train_highspeed_rail |
| 658 | davos - wef - forum - economic - delegate | 16 | 658_davos_wef_forum_economic |
| 659 | abbott - putin - g20 - apec - summit | 16 | 659_abbott_putin_g20_apec |
| 660 | nelson - nelsons - trafalgar - hms - admiral | 16 | 660_nelson_nelsons_trafalgar_hms |
| 661 | marijuana - drug - uruguay - coca - decriminalization | 16 | 661_marijuana_drug_uruguay_coca |
| 662 | oni - konye - acid - naomi - niqab | 16 | 662_oni_konye_acid_naomi |
| 663 | skirt - uniform - trouser - pupil - school | 16 | 663_skirt_uniform_trouser_pupil |
| 664 | diet - dairy - eating - fat - weight | 16 | 664_diet_dairy_eating_fat |
| 665 | violin - stradivarius - instrument - stradivari - violins | 16 | 665_violin_stradivarius_instrument_stradivari |
| 666 | reef - container - mnz - rena - ship | 16 | 666_reef_container_mnz_rena |
| 667 | gabor - anhalt - zsa - gabors - von | 16 | 667_gabor_anhalt_zsa_gabors |
| 668 | ecigarette - charger - ecigarettes - exploded - charging | 16 | 668_ecigarette_charger_ecigarettes_exploded |
| 669 | ski - chalet - skiing - resort - skier | 16 | 669_ski_chalet_skiing_resort |
| 670 | thai - rohingya - thailand - myanmar - refugee | 16 | 670_thai_rohingya_thailand_myanmar |
| 671 | porn - condom - hiv - leathers - moratorium | 16 | 671_porn_condom_hiv_leathers |
| 672 | omega3 - fish - triglyceride - oily - fatty | 16 | 672_omega3_fish_triglyceride_oily |
| 673 | amish - mullet - mullets - beard - haircutting | 16 | 673_amish_mullet_mullets_beard |
| 674 | houston - houstons - whitney - winans - davis | 16 | 674_houston_houstons_whitney_winans |
| 675 | sierra - sarai - sierras - istanbul - galata | 16 | 675_sierra_sarai_sierras_istanbul |
| 676 | ballet - dance - mickael - dancer - acosta | 16 | 676_ballet_dance_mickael_dancer |
| 677 | jackson - jesse - sandi - jacksons - jr | 16 | 677_jackson_jesse_sandi_jacksons |
| 678 | taser - tasers - stun - officer - laudisio | 16 | 678_taser_tasers_stun_officer |
| 679 | revenge - porn - bollaert - explicit - posting | 16 | 679_revenge_porn_bollaert_explicit |
| 680 | expedition - antarctic - scotts - shackleton - shackletons | 16 | 680_expedition_antarctic_scotts_shackleton |
| 681 | wheatley - hmp - prison - standford - prisoner | 16 | 681_wheatley_hmp_prison_standford |
| 682 | queen - portrait - belmar - warhol - royal | 15 | 682_queen_portrait_belmar_warhol |
| 683 | huangs - huang - qatari - qatar - gloria | 15 | 683_huangs_huang_qatari_qatar |
| 684 | favourite - fan - yougov - personalitywise - hobbies | 15 | 684_favourite_fan_yougov_personalitywise |
| 685 | knight - suge - shakur - rap - compton | 15 | 685_knight_suge_shakur_rap |
| 686 | dozier - kimmerle - marianna - buried - graf | 15 | 686_dozier_kimmerle_marianna_buried |
| 687 | fcc - neutrality - internet - provider - net | 15 | 687_fcc_neutrality_internet_provider |
| 688 | immigration - whitman - immigrant - undocumented - citizenship | 15 | 688_immigration_whitman_immigrant_undocumented |
| 689 | tower - building - skyscraper - subway - pavilion | 15 | 689_tower_building_skyscraper_subway |
| 690 | rebecca - bullying - judd - rebeccas - sedwick | 15 | 690_rebecca_bullying_judd_rebeccas |
| 691 | berezovsky - abramovich - pugachev - chistyakov - oligarch | 15 | 691_berezovsky_abramovich_pugachev_chistyakov |
| 692 | rat - rats - pest - rodent - piedmont | 15 | 692_rat_rats_pest_rodent |
| 693 | cameron - samantha - ivan - camerons - chequers | 15 | 693_cameron_samantha_ivan_camerons |
| 694 | blasio - mayor - snow - de - roker | 15 | 694_blasio_mayor_snow_de |
| 695 | kitty - hello - sanrio - tsuji - cat | 15 | 695_kitty_hello_sanrio_tsuji |
| 696 | megrahi - lockerbie - almegrahi - megrahis - libya | 15 | 696_megrahi_lockerbie_almegrahi_megrahis |
| 697 | lexi - hollinghurst - sandpiper - liam - operator | 15 | 697_lexi_hollinghurst_sandpiper_liam |
| 698 | ons - married - stepfamilies - proportion - cent | 15 | 698_ons_married_stepfamilies_proportion |
| 699 | mckenna - miller - bode - beck - millers | 15 | 699_mckenna_miller_bode_beck |
| 700 | implant - pip - implants - breast - mas | 15 | 700_implant_pip_implants_breast |
| 701 | blasio - nypd - mayor - de - garner | 15 | 701_blasio_nypd_mayor_de |
| 702 | drug - heroin - drugs - ecstasy - khat | 15 | 702_drug_heroin_drugs_ecstasy |
| 703 | williams - robin - depression - doubtfire - parkinsons | 15 | 703_williams_robin_depression_doubtfire |
| 704 | niger - oil - delta - mend - nigerias | 15 | 704_niger_oil_delta_mend |
| 705 | spains - protest - madrid - demonstration - austerity | 15 | 705_spains_protest_madrid_demonstration |
| 706 | dakar - peterhansel - despres - rally - stage | 15 | 706_dakar_peterhansel_despres_rally |
| 707 | missile - satellite - defense - pentagon - hydrazine | 15 | 707_missile_satellite_defense_pentagon |
| 708 | mammoth - woolly - tusk - fossil - mammoths | 15 | 708_mammoth_woolly_tusk_fossil |
| 709 | qatada - qatadas - jordan - abu - deport | 15 | 709_qatada_qatadas_jordan_abu |
| 710 | ciancia - tsa - hernandez - airport - gerardo | 15 | 710_ciancia_tsa_hernandez_airport |
| 711 | spider - bite - bitten - widow - false | 15 | 711_spider_bite_bitten_widow |
| 712 | roaming - bt - mobile - broadband - comcast | 15 | 712_roaming_bt_mobile_broadband |
| 713 | labeouf - shia - labeoufs - actor - clowes | 15 | 713_labeouf_shia_labeoufs_actor |
| 714 | hajj - pilgrim - mecca - pilgrimage - kaaba | 15 | 714_hajj_pilgrim_mecca_pilgrimage |
| 715 | prabowo - widodo - jakarta - indonesia - jokowi | 15 | 715_prabowo_widodo_jakarta_indonesia |
| 716 | duggan - duggans - ipcc - tottenham - mark | 15 | 716_duggan_duggans_ipcc_tottenham |
| 717 | christmas - festive - shopping - gift - spend | 14 | 717_christmas_festive_shopping_gift |
| 718 | nujood - marriage - saudi - nada - yemen | 14 | 718_nujood_marriage_saudi_nada |
| 719 | bag - herms - mulberry - vuitton - oshkosh | 14 | 719_bag_herms_mulberry_vuitton |
| 720 | insect - salmon - bickerton - locust - fish | 14 | 720_insect_salmon_bickerton_locust |
| 721 | isis - jordanian - alkasasbeh - islamic - kasasbeh | 14 | 721_isis_jordanian_alkasasbeh_islamic |
| 722 | mississippi - river - atchafalaya - spillway - vicksburg | 14 | 722_mississippi_river_atchafalaya_spillway |
| 723 | eroshevich - stern - smiths - smith - kapoor | 14 | 723_eroshevich_stern_smiths_smith |
| 724 | munoz - marlise - erick - fetus - munozs | 14 | 724_munoz_marlise_erick_fetus |
| 725 | fata - medicare - fatas - fraud - medicaid | 14 | 725_fata_medicare_fatas_fraud |
| 726 | horman - kyron - terri - kyrons - kaine | 14 | 726_horman_kyron_terri_kyrons |
| 727 | prescription - heroin - drug - opioid - painkiller | 14 | 727_prescription_heroin_drug_opioid |
| 728 | karoshi - strike - france - paris - aulnay | 14 | 728_karoshi_strike_france_paris |
| 729 | weier - geyser - slender - slenderman - leutner | 14 | 729_weier_geyser_slender_slenderman |
| 730 | dementia - alzheimers - diagnosis - cure - pfizer | 14 | 730_dementia_alzheimers_diagnosis_cure |
| 731 | typhoon - taiwan - philippines - fujian - xinhua | 14 | 731_typhoon_taiwan_philippines_fujian |
| 732 | python - cleese - monty - pythons - idle | 14 | 732_python_cleese_monty_pythons |
| 733 | william - raf - helicopter - prince - duke | 14 | 733_william_raf_helicopter_prince |
| 734 | clooney - amal - alamuddin - clooneys - amals | 14 | 734_clooney_amal_alamuddin_clooneys |
| 735 | porn - pornography - explicit - sexting - online | 14 | 735_porn_pornography_explicit_sexting |
| 736 | orchestra - gergiev - conductor - symphony - musician | 14 | 736_orchestra_gergiev_conductor_symphony |
| 737 | bout - farc - indictment - dea - hunter | 14 | 737_bout_farc_indictment_dea |
| 738 | galactic - virgin - spaceport - branson - space | 14 | 738_galactic_virgin_spaceport_branson |
| 739 | ageing - apoe - vitamin - lifespan - alzheimers | 14 | 739_ageing_apoe_vitamin_lifespan |
| 740 | bangkok - moradi - thai - iranian - israeli | 14 | 740_bangkok_moradi_thai_iranian |
| 741 | foreclosure - foreclosed - trump - hud - kollars | 14 | 741_foreclosure_foreclosed_trump_hud |
| 742 | gosselin - jon - kate - mady - tlc | 14 | 742_gosselin_jon_kate_mady |
| 743 | patient - burley - iqbal - gmc - panel | 14 | 743_patient_burley_iqbal_gmc |
| 744 | rousseff - brazils - brazil - neves - brazilian | 14 | 744_rousseff_brazils_brazil_neves |
| 745 | chism - ritzer - danvers - ritzers - chisms | 14 | 745_chism_ritzer_danvers_ritzers |
| 746 | mali - gourdel - french - belmoktar - hostage | 14 | 746_mali_gourdel_french_belmoktar |
| 747 | castresana - montt - guatemala - ixil - rios | 14 | 747_castresana_montt_guatemala_ixil |
| 748 | spelling - bee - lala - scripps - kush | 14 | 748_spelling_bee_lala_scripps |
| 749 | ipo - zuckerberg - facebooks - stock - facebook | 14 | 749_ipo_zuckerberg_facebooks_stock |
| 750 | zanzibar - trup - gee - kirstie - acid | 14 | 750_zanzibar_trup_gee_kirstie |
| 751 | jubilee - tower - queen - diamond - frecklington | 13 | 751_jubilee_tower_queen_diamond |
| 752 | carlisle - lorry - clarke - carlisles - a64 | 13 | 752_carlisle_lorry_clarke_carlisles |
| 753 | chinese - china - confucius - education - student | 13 | 753_chinese_china_confucius_education |
| 754 | garden - snowdrop - sidmouth - plant - tree | 13 | 754_garden_snowdrop_sidmouth_plant |
| 755 | deforestation - forest - rainforest - indonesia - dioxide | 13 | 755_deforestation_forest_rainforest_indonesia |
| 756 | clark - clarks - huguette - heiress - reclusive | 13 | 756_clark_clarks_huguette_heiress |
| 757 | migraine - headache - aura - trigeminal - neuralgia | 13 | 757_migraine_headache_aura_trigeminal |
| 758 | clinton - clintons - clot - hillary - concussion | 13 | 758_clinton_clintons_clot_hillary |
| 759 | dmlaterbundle - twittervia - zann - lanza - ilfracombe | 13 | 759_dmlaterbundle_twittervia_zann_lanza |
| 760 | fashion - african - bortolussi - designer - kinabuti | 13 | 760_fashion_african_bortolussi_designer |
| 761 | cliff - chalet - ridgemont - erosion - landslide | 13 | 761_cliff_chalet_ridgemont_erosion |
| 762 | thanksgiving - aaa - traveler - travel - airline | 13 | 762_thanksgiving_aaa_traveler_travel |
| 763 | mccluskie - mccluskies - gemma - canal - eastenders | 13 | 763_mccluskie_mccluskies_gemma_canal |
| 764 | heaney - poet - thomass - thomas - poetry | 13 | 764_heaney_poet_thomass_thomas |
| 765 | seat - stroller - graco - recall - bumbo | 13 | 765_seat_stroller_graco_recall |
| 766 | pension - retirement - age - pensions - bichard | 13 | 766_pension_retirement_age_pensions |
| 767 | tiller - roeder - tillers - abortion - antiabortion | 13 | 767_tiller_roeder_tillers_abortion |
| 768 | japanese - okinawa - hadnott - japan - guam | 13 | 768_japanese_okinawa_hadnott_japan |
| 769 | arafat - arafats - polonium - palestinian - polonium210 | 13 | 769_arafat_arafats_polonium_palestinian |
| 770 | laden - bin - thirty - cia - zero | 13 | 770_laden_bin_thirty_cia |
| 771 | exorcism - possessed - exorcist - demon - priest | 13 | 771_exorcism_possessed_exorcist_demon |
| 772 | refugee - guterres - syrian - syrians - refugees | 13 | 772_refugee_guterres_syrian_syrians |
| 773 | facebook - snapchat - user - social - princeton | 13 | 773_facebook_snapchat_user_social |
| 774 | itu - internet - treaty - wcit - telecommunication | 13 | 774_itu_internet_treaty_wcit |
| 775 | keyes - koenig - anchorage - koenigs - currier | 13 | 775_keyes_koenig_anchorage_koenigs |
| 776 | mi6 - williams - holdall - gareth - bag | 13 | 776_mi6_williams_holdall_gareth |
| 777 | whiplash - insurance - insurer - motor - premium | 13 | 777_whiplash_insurance_insurer_motor |
| 778 | signhild - snyder - reginella - lynsi - kotak | 13 | 778_signhild_snyder_reginella_lynsi |
| 779 | pemberton - laude - philippine - olongapo - manila | 13 | 779_pemberton_laude_philippine_olongapo |
| 780 | crime - force - constable - inspector - unrecorded | 13 | 780_crime_force_constable_inspector |
| 781 | jews - antisemitic - antisemitism - jewish - holocaust | 13 | 781_jews_antisemitic_antisemitism_jewish |
| 782 | vineberg - hoffman - heroin - hoffmans - seymour | 13 | 782_vineberg_hoffman_heroin_hoffmans |
| 783 | airasia - indonesian - fuselage - surabaya - plane | 13 | 783_airasia_indonesian_fuselage_surabaya |
| 784 | population - billion - stutz - growth - cartogram | 13 | 784_population_billion_stutz_growth |
| 785 | earthquake - quake - magnitude - iran - irna | 13 | 785_earthquake_quake_magnitude_iran |
| 786 | restaurant - chef - michelin - roca - dish | 12 | 786_restaurant_chef_michelin_roca |
| 787 | benghazi - consulate - stevens - libya - ambassador | 12 | 787_benghazi_consulate_stevens_libya |
| 788 | greenslate - food - stamp - snap - stamps | 12 | 788_greenslate_food_stamp_snap |
| 789 | dookhan - dookhans - chemist - lab - massachusetts | 12 | 789_dookhan_dookhans_chemist_lab |
| 790 | samesex - gender - seijas - gay - lulu | 12 | 790_samesex_gender_seijas_gay |
| 791 | contraception - religious - contraceptive - coverage - mandate | 12 | 791_contraception_religious_contraceptive_coverage |
| 792 | zellweger - kabbalah - ellar - horton - bradley | 12 | 792_zellweger_kabbalah_ellar_horton |
| 793 | prom - farves - homecoming - davuluri - kropp | 12 | 793_prom_farves_homecoming_davuluri |
| 794 | gurion - tel - aviv - israel - airline | 12 | 794_gurion_tel_aviv_israel |
| 795 | rotherham - sexual - abuse - asian - grooming | 12 | 795_rotherham_sexual_abuse_asian |
| 796 | hair - haircut - shave - academy - shaved | 12 | 796_hair_haircut_shave_academy |
| 797 | paris - padlock - pont - seine - bridge | 12 | 797_paris_padlock_pont_seine |
| 798 | jolie - hague - summit - angelina - violence | 12 | 798_jolie_hague_summit_angelina |
| 799 | iplayer - bbc - licence - catchup - hd | 12 | 799_iplayer_bbc_licence_catchup |
| 800 | clock - westworth - daylight - sundial - maggiolo | 12 | 800_clock_westworth_daylight_sundial |
| 801 | rivers - joan - karen - apprentice - trump | 12 | 801_rivers_joan_karen_apprentice |
| 802 | strike - union - walkout - ballot - hedley | 12 | 802_strike_union_walkout_ballot |
| 803 | suri - katie - cruise - tom - holmes | 12 | 803_suri_katie_cruise_tom |
| 804 | oculus - virtual - headset - vr - rift | 12 | 804_oculus_virtual_headset_vr |
| 805 | rahman - hamlets - lutfur - cerit - electoral | 12 | 805_rahman_hamlets_lutfur_cerit |
| 806 | apd - tax - haul - duty - passenger | 12 | 806_apd_tax_haul_duty |
| 807 | delhi - commonwealth - games - india - fennell | 12 | 807_delhi_commonwealth_games_india |
| 808 | asiana - ktvu - 214 - ntsb - intern | 12 | 808_asiana_ktvu_214_ntsb |
| 809 | lucan - rivett - lucans - bingham - lord | 12 | 809_lucan_rivett_lucans_bingham |
| 810 | hut - beach - scalpay - island - widmouth | 12 | 810_hut_beach_scalpay_island |
| 811 | hamm - arnall - continental - hamms - oklahoma | 12 | 811_hamm_arnall_continental_hamms |
| 812 | game - gamers - violent - violence - gamergate | 12 | 812_game_gamers_violent_violence |
| 813 | immigrant - immigration - dolon - detention - deportation | 12 | 813_immigrant_immigration_dolon_detention |
| 814 | singapore - singapores - singaporeans - seng - yakuza | 11 | 814_singapore_singapores_singaporeans_seng |
| 815 | mendoza - catronio - ferrante - welker - marisa | 11 | 815_mendoza_catronio_ferrante_welker |
| 816 | x37b - ixv - space - orbit - rocket | 11 | 816_x37b_ixv_space_orbit |
| 817 | cambodia - cambodian - phnom - penh - aple | 11 | 817_cambodia_cambodian_phnom_penh |
| 818 | payment - zapp - wallet - mobile - looppay | 11 | 818_payment_zapp_wallet_mobile |
| 819 | muslims - husain - rupertsfault - ansari - muslim | 11 | 819_muslims_husain_rupertsfault_ansari |
| 820 | ravi - clementi - clementis - rutgers - webcam | 11 | 820_ravi_clementi_clementis_rutgers |
| 821 | prayer - freshwater - baptists - pledge - school | 11 | 821_prayer_freshwater_baptists_pledge |
| 822 | workout - plitt - fitness - tabata - norton | 11 | 822_workout_plitt_fitness_tabata |
| 823 | council - shiel - land - bunting - flowerbed | 11 | 823_council_shiel_land_bunting |
| 824 | christie - mcauliffe - cuccinelli - sarvis - booker | 11 | 824_christie_mcauliffe_cuccinelli_sarvis |
| 825 | comoros - moroni - yemenia - airbus - nadhoim | 11 | 825_comoros_moroni_yemenia_airbus |
| 826 | inapp - purchase - purchases - apple - refund | 11 | 826_inapp_purchase_purchases_apple |
| 827 | sewer - fatberg - blockage - wipe - fatbergs | 11 | 827_sewer_fatberg_blockage_wipe |
| 828 | helmet - hitchbot - ultrabike - plixi - bike | 11 | 828_helmet_hitchbot_ultrabike_plixi |
| 829 | havel - czech - prague - wenceslas - pragues | 11 | 829_havel_czech_prague_wenceslas |
| 830 | shereka - dartford - milby - scene - shot | 11 | 830_shereka_dartford_milby_scene |
| 831 | nobel - prize - steinman - gurdon - beutler | 11 | 831_nobel_prize_steinman_gurdon |
| 832 | teresa - giudice - giudices - joe - housewives | 11 | 832_teresa_giudice_giudices_joe |
| 833 | enfarinats - jarramplas - els - festival - ibi | 11 | 833_enfarinats_jarramplas_els_festival |
| 834 | squirrel - grey - squirrels - albino - red | 11 | 834_squirrel_grey_squirrels_albino |
| 835 | nio - heatwaves - warmest - temperature - el | 11 | 835_nio_heatwaves_warmest_temperature |
| 836 | vacation - hohlbaum - holiday - goodman - worker | 11 | 836_vacation_hohlbaum_holiday_goodman |
| 837 | kody - polygamy - wives - meri - robyn | 11 | 837_kody_polygamy_wives_meri |
| 838 | shoe - heel - shoes - uform - hassell | 11 | 838_shoe_heel_shoes_uform |
| 839 | coin - mint - coins - 1933 - minted | 11 | 839_coin_mint_coins_1933 |
| 840 | hong - kong - kongs - shui - kuek | 11 | 840_hong_kong_kongs_shui |
| 841 | nguyen - meitiv - nguyens - cancer - stilley | 11 | 841_nguyen_meitiv_nguyens_cancer |
| 842 | oil - isis - baiji - iraq - kurdistan | 10 | 842_oil_isis_baiji_iraq |
| 843 | skull - charie - lupak - luptak - graystock | 10 | 843_skull_charie_lupak_luptak |
| 844 | plant - greenhouse - mars - space - grow | 10 | 844_plant_greenhouse_mars_space |
| 845 | tree - branch - kew - toeppe - mcnulty | 10 | 845_tree_branch_kew_toeppe |
| 846 | ikea - ikeas - furniture - kamprad - fsc | 10 | 846_ikea_ikeas_furniture_kamprad |
| 847 | lincoln - lincolns - kunhardt - gettysburg - abraham | 10 | 847_lincoln_lincolns_kunhardt_gettysburg |
| 848 | ramsey - jonbenet - ramseys - patsy - boulder | 10 | 848_ramsey_jonbenet_ramseys_patsy |
| 849 | cjd - vcjd - prion - disease - cow | 10 | 849_cjd_vcjd_prion_disease |
| 850 | chemical - vocs - mattress - bpa - perchlorate | 10 | 850_chemical_vocs_mattress_bpa |
| 851 | basescu - ponta - romanias - nastase - traian | 10 | 851_basescu_ponta_romanias_nastase |
| 852 | cloud - clouds - lenticular - mammatus - nacreous | 10 | 852_cloud_clouds_lenticular_mammatus |
| 853 | rizzi - dog - doxy - flight - attendant | 10 | 853_rizzi_dog_doxy_flight |
| 854 | laser - railgun - weapon - electromagnetic - beam | 10 | 854_laser_railgun_weapon_electromagnetic |
| 855 | rigby - rigbys - fusilier - drummer - woolwich | 10 | 855_rigby_rigbys_fusilier_drummer |
| 856 | bobo - adams - bobos - autry - holly | 10 | 856_bobo_adams_bobos_autry |
| 857 | biofluorescence - socotra - ultraviolet - light - uv | 10 | 857_biofluorescence_socotra_ultraviolet_light |
| 858 | lambie - burqa - senator - jacqui - abdo | 10 | 858_lambie_burqa_senator_jacqui |
| 859 | laden - bin - qaeda - al - ladens | 10 | 859_laden_bin_qaeda_al |
| 860 | dog - pifas - kehnast - diablo - jess | 10 | 860_dog_pifas_kehnast_diablo |
| 861 | cathedral - pauls - tent - protester - camp | 10 | 861_cathedral_pauls_tent_protester |
| 862 | huguely - lacrosse - huguelys - yeardley - loves | 10 | 862_huguely_lacrosse_huguelys_yeardley |
| 863 | pakistan - afridi - shahzad - bin - pakistani | 10 | 863_pakistan_afridi_shahzad_bin |
| 864 | mandate - supreme - subsidy - individual - law | 10 | 864_mandate_supreme_subsidy_individual |
| 865 | hawking - als - mnd - hawkings - disease | 10 | 865_hawking_als_mnd_hawkings |
| 866 | clarkson - plate - gear - fkl - h982 | 10 | 866_clarkson_plate_gear_fkl |
| 867 | hostel - hotel - suite - guest - shangrila | 10 | 867_hostel_hotel_suite_guest |
| 868 | jordan - amman - arab - jordans - jordanian | 10 | 868_jordan_amman_arab_jordans |
| 869 | percival - kaufenberg - barnes - chrzaszcz - tonks | 10 | 869_percival_kaufenberg_barnes_chrzaszcz |
| 870 | teeth - whitening - brush - toothbrush - brushing | 10 | 870_teeth_whitening_brush_toothbrush |
| 871 | ranta - comers - comer - chaskel - deacon | 10 | 871_ranta_comers_comer_chaskel |
| 872 | derailed - train - lacmegantic - derailment - burkhardt | 10 | 872_derailed_train_lacmegantic_derailment |
| 873 | jessa - duggars - duggar - guthrie - savannah | 10 | 873_jessa_duggars_duggar_guthrie |
| 874 | expectancy - centenarian - older - index - agewatch | 10 | 874_expectancy_centenarian_older_index |
| 875 | buffett - berkshire - britt - buffetts - hathaway | 10 | 875_buffett_berkshire_britt_buffetts |
| 876 | smell - scent - odour - deodorant - sulphide | 10 | 876_smell_scent_odour_deodorant |
| 877 | sivia - mcrae - rawlings - knotweed - sison | 10 | 877_sivia_mcrae_rawlings_knotweed |
| 878 | poverty - economic - wage - mcpherson - income | 10 | 878_poverty_economic_wage_mcpherson |
| 879 | note - bank - lews - signature - lew | 10 | 879_note_bank_lews_signature |
| 880 | oktoberfest - beer - polizzi - festival - collodi | 10 | 880_oktoberfest_beer_polizzi_festival |
| 881 | abe - japans - kan - japan - hatoyama | 10 | 881_abe_japans_kan_japan |
| 882 | nahla - halle - aubry - berry - gabriel | 9 | 882_nahla_halle_aubry_berry |
| 883 | syria - congress - chemical - kerry - obama | 9 | 883_syria_congress_chemical_kerry |
| 884 | bush - methodist - barbara - houston - hw | 9 | 884_bush_methodist_barbara_houston |
| 885 | plague - bubonic - madagascar - flea - locust | 9 | 885_plague_bubonic_madagascar_flea |
| 886 | simon - sinitta - bergantz - factor - niall | 9 | 886_simon_sinitta_bergantz_factor |
| 887 | affirmative - admission - diversity - supreme - fisher | 9 | 887_affirmative_admission_diversity_supreme |
| 888 | minot - levee - river - souris - dakota | 9 | 888_minot_levee_river_souris |
| 889 | jihadi - slee - bary - souaan - syria | 9 | 889_jihadi_slee_bary_souaan |
| 890 | lightning - bolt - struck - thunderstorm - strike | 9 | 890_lightning_bolt_struck_thunderstorm |
| 891 | farrow - allen - woody - ronan - dylan | 9 | 891_farrow_allen_woody_ronan |
| 892 | brothel - trafficking - slavery - bello - juju | 9 | 892_brothel_trafficking_slavery_bello |
| 893 | weightlifting - powerlifting - lifting - lift - bronwyn | 9 | 893_weightlifting_powerlifting_lifting_lift |
| 894 | stanford - stanfords - wasendorf - antigua - financier | 9 | 894_stanford_stanfords_wasendorf_antigua |
| 895 | soca - private - list - vaz - greymans | 9 | 895_soca_private_list_vaz |
| 896 | weight - diet - watchers - calorie - dieter | 9 | 896_weight_diet_watchers_calorie |
| 897 | boj - yen - japans - japan - ghosn | 9 | 897_boj_yen_japans_japan |
| 898 | bercow - mills - commons - clerk - parliamentary | 9 | 898_bercow_mills_commons_clerk |
| 899 | cave - limbert - bridge - doong - cavers | 9 | 899_cave_limbert_bridge_doong |
| 900 | monteith - cory - glee - monteiths - lea | 9 | 900_monteith_cory_glee_monteiths |
| 901 | dangi - kulkarni - guinness - mruga - tall | 9 | 901_dangi_kulkarni_guinness_mruga |
| 902 | sony - yen - sonys - hirai - electronics | 9 | 902_sony_yen_sonys_hirai |
| 903 | bat - owl - chaffinch - bird - puffin | 9 | 903_bat_owl_chaffinch_bird |
| 904 | yelland - wyverstone - cregan - firearm - weapon | 9 | 904_yelland_wyverstone_cregan_firearm |
| 905 | radiation - mobile - phone - cancer - tawkon | 9 | 905_radiation_mobile_phone_cancer |
| 906 | windslowe - silicone - aderotimi - injection - glue | 9 | 906_windslowe_silicone_aderotimi_injection |
| 907 | ice - kungur - harbin - trolltunga - cave | 9 | 907_ice_kungur_harbin_trolltunga |
| 908 | driver - driving - speeding - redspeed - motorist | 9 | 908_driver_driving_speeding_redspeed |
| 909 | berlin - nobel - prize - obama - opcw | 9 | 909_berlin_nobel_prize_obama |
| 910 | harrismoore - harrismoores - colton - barefoot - bandit | 9 | 910_harrismoore_harrismoores_colton_barefoot |
| 911 | napoleon - waterloo - bonaparte - napoleons - wellington | 9 | 911_napoleon_waterloo_bonaparte_napoleons |
| 912 | dimon - jpmorgan - bonus - bank - mf | 9 | 912_dimon_jpmorgan_bonus_bank |
| 913 | phubbing - checking - email - lunch - phone | 9 | 913_phubbing_checking_email_lunch |
| 914 | darwin - springthorpe - lehan - tepper - fraser | 9 | 914_darwin_springthorpe_lehan_tepper |
| 915 | cort - nursery - stowe - tudur - methley | 8 | 915_cort_nursery_stowe_tudur |
| 916 | massage - bains - yeoh - pytlarz - abusin | 8 | 916_massage_bains_yeoh_pytlarz |
| 917 | nuisance - landline - calls - text - bt6500 | 8 | 917_nuisance_landline_calls_text |
| 918 | sopa - piracy - pipa - internet - reddit | 8 | 918_sopa_piracy_pipa_internet |
| 919 | peterson - savio - stacy - savios - petersons | 8 | 919_peterson_savio_stacy_savios |
| 920 | aig - bailout - bonus - fannie - lending | 8 | 920_aig_bailout_bonus_fannie |
| 921 | lamma - ferry - hong - kong - vessel | 8 | 921_lamma_ferry_hong_kong |
| 922 | ramsay - hutcheson - chef - randle - tana | 8 | 922_ramsay_hutcheson_chef_randle |
| 923 | meth - methamphetamine - crystal - drug - breaking | 8 | 923_meth_methamphetamine_crystal_drug |
| 924 | perry - perrys - texas - governor - oops | 8 | 924_perry_perrys_texas_governor |
| 925 | lunar - moon - rover - rabbit - moons | 8 | 925_lunar_moon_rover_rabbit |
| 926 | avon - careerbuildercom - volunteering - earn - job | 8 | 926_avon_careerbuildercom_volunteering_earn |
| 927 | vick - vicks - falcons - nfl - dogfighting | 8 | 927_vick_vicks_falcons_nfl |
| 928 | polish - kaczynski - katyn - poland - kaczynskis | 8 | 928_polish_kaczynski_katyn_poland |
| 929 | search - warrantless - cell - fakhoury - phone | 8 | 929_search_warrantless_cell_fakhoury |
| 930 | goldman - sean - brazilian - bruna - custody | 8 | 930_goldman_sean_brazilian_bruna |
| 931 | circus - performer - ringling - barnum - providence | 8 | 931_circus_performer_ringling_barnum |
| 932 | greaves - saviours - organist - maureen - sheffield | 8 | 932_greaves_saviours_organist_maureen |
| 933 | suleman - octuplets - nadya - kamrava - octomom | 8 | 933_suleman_octuplets_nadya_kamrava |
| 934 | pumpkin - grower - hedge - neale - record | 8 | 934_pumpkin_grower_hedge_neale |
| 935 | shafilea - ahmed - shafia - mevish - badiuzzaman | 8 | 935_shafilea_ahmed_shafia_mevish |
| 936 | contostavlos - tulisa - varey - glc - morgan | 8 | 936_contostavlos_tulisa_varey_glc |
| 937 | xinhua - kui - li - chinese - zhou | 8 | 937_xinhua_kui_li_chinese |
| 938 | stress - cft - meditation - anxiety - depression | 8 | 938_stress_cft_meditation_anxiety |
| 939 | maoist - chhattisgarh - singh - raipur - maoists | 8 | 939_maoist_chhattisgarh_singh_raipur |
| 940 | tyrell - william - kendall - spedding - nsw | 8 | 940_tyrell_william_kendall_spedding |
| 941 | valle - valles - gilberto - hise - kidnap | 8 | 941_valle_valles_gilberto_hise |
| 942 | bunker - silo - underground - missile - dring | 8 | 942_bunker_silo_underground_missile |
| 943 | nhs - redundancy - payoffs - rehired - redundant | 8 | 943_nhs_redundancy_payoffs_rehired |
| 944 | blackwater - waxman - iraqi - xe - iraq | 8 | 944_blackwater_waxman_iraqi_xe |
| 945 | detroit - grosse - flint - detroits - pointe | 8 | 945_detroit_grosse_flint_detroits |
| 946 | thames - water - meter - sewage - aman | 8 | 946_thames_water_meter_sewage |
| 947 | archbishop - welby - canterbury - church - christianity | 8 | 947_archbishop_welby_canterbury_church |
| 948 | clews - caviar - vegan - wagyu - sevruga | 8 | 948_clews_caviar_vegan_wagyu |
| 949 | santa - elf - christmas - santas - lily | 7 | 949_santa_elf_christmas_santas |
| 950 | poverty - appalachia - census - appalachian - bureau | 7 | 950_poverty_appalachia_census_appalachian |
| 951 | waddington - dean - bishop - archbishop - church | 7 | 951_waddington_dean_bishop_archbishop |
| 952 | psy - gangnam - psys - snoop - youtube | 7 | 952_psy_gangnam_psys_snoop |
| 953 | methylamphetamine - methamphetamine - kilogram - australian - meth | 7 | 953_methylamphetamine_methamphetamine_kilogram_australian |
| 954 | savernake - beesley - earl - trustee - farndale | 7 | 954_savernake_beesley_earl_trustee |
| 955 | confinement - solitary - bullock - church - melton | 7 | 955_confinement_solitary_bullock_church |
| 956 | tia - hazell - tias - bridger - sharp | 7 | 956_tia_hazell_tias_bridger |
| 957 | compensation - poliuscurran - teacher - aciro - academies | 7 | 957_compensation_poliuscurran_teacher_aciro |
| 958 | theatre - lambros - cinema - purim - dangour | 7 | 958_theatre_lambros_cinema_purim |
| 959 | belfort - belforts - copperfield - wolf - oakmont | 7 | 959_belfort_belforts_copperfield_wolf |
| 960 | goode - francis - girl - boy - yarlington | 7 | 960_goode_francis_girl_boy |
| 961 | stevens - benghazi - libya - libyan - embassy | 7 | 961_stevens_benghazi_libya_libyan |
| 962 | boness - cv - tooth - pluss - job | 7 | 962_boness_cv_tooth_pluss |
| 963 | malaria - parasite - eradication - vaccine - mosquito | 7 | 963_malaria_parasite_eradication_vaccine |
| 964 | space - chinas - aerobatic - china - shenzhou10 | 7 | 964_space_chinas_aerobatic_china |
| 965 | cycle - superhighway - railway - route - london | 7 | 965_cycle_superhighway_railway_route |
| 966 | corset - penny - ruffinelli - hips - goddiva | 7 | 966_corset_penny_ruffinelli_hips |
| 967 | alkhansa - raqqa - islamic - hrw - suha | 7 | 967_alkhansa_raqqa_islamic_hrw |
| 968 | taveras - reyes - alvarado - leopoldo - quintanilla | 7 | 968_taveras_reyes_alvarado_leopoldo |
| 969 | bulb - leds - paddle - edisons - bulbs | 7 | 969_bulb_leds_paddle_edisons |
| 970 | goodman - goodmans - wilsons - hutchins - polo | 7 | 970_goodman_goodmans_wilsons_hutchins |
| 971 | worboys - ruse - ryn - kelcher - nbv | 7 | 971_worboys_ruse_ryn_kelcher |
| 972 | humanpowered - aircraft - rotor - skyprowler - efan | 7 | 972_humanpowered_aircraft_rotor_skyprowler |
| 973 | elvis - presley - graceland - presleys - elviss | 7 | 973_elvis_presley_graceland_presleys |
| 974 | council - tax - councils - pickles - allowance | 7 | 974_council_tax_councils_pickles |
| 975 | ferrante - cyanide - klein - creatine - spears | 7 | 975_ferrante_cyanide_klein_creatine |
| 976 | translation - hawaiian - language - donaghy - translate | 7 | 976_translation_hawaiian_language_donaghy |
| 977 | ear - sherrie - cartilage - surgery - charlotte | 7 | 977_ear_sherrie_cartilage_surgery |
| 978 | rizzo - salary - bell - spaccia - city | 7 | 978_rizzo_salary_bell_spaccia |
| 979 | berlin - aretz - freeman - 1989 - german | 7 | 979_berlin_aretz_freeman_1989 |
| 980 | pothole - council - road - potholes - lane | 7 | 980_pothole_council_road_potholes |
| 981 | podesta - bush - rating - percent - poll | 7 | 981_podesta_bush_rating_percent |
| 982 | sata - mutharika - banda - malawi - zambian | 7 | 982_sata_mutharika_banda_malawi |
| 983 | grimm - grimms - scotto - durand - congressman | 7 | 983_grimm_grimms_scotto_durand |
| 984 | barge - sancoff - google - susitna - borough | 7 | 984_barge_sancoff_google_susitna |
| 985 | draper - dobson - gentles - car - webb | 7 | 985_draper_dobson_gentles_car |
| 986 | fawcett - oneal - farrah - fawcetts - warhol | 7 | 986_fawcett_oneal_farrah_fawcetts |
| 987 | diamond - beers - diamonds - cullinan - mine | 7 | 987_diamond_beers_diamonds_cullinan |
| 988 | koralewski - pilkington - blackwell - care - siobhan | 7 | 988_koralewski_pilkington_blackwell_care |
| 989 | hundley - bennett - jonah - rickey - shein | 7 | 989_hundley_bennett_jonah_rickey |
| 990 | demi - ashton - demis - grammer - bure | 7 | 990_demi_ashton_demis_grammer |
| 991 | christmas - wanner - johann - mulled - market | 7 | 991_christmas_wanner_johann_mulled |
| 992 | connelly - barker - tracey - haringey - owen | 6 | 992_connelly_barker_tracey_haringey |
| 993 | righttowork - union - unionism - embryonic - lamberth | 6 | 993_righttowork_union_unionism_embryonic |
| 994 | mcchrystal - fallon - petraeus - mcchrystals - gates | 6 | 994_mcchrystal_fallon_petraeus_mcchrystals |
| 995 | nellore - railway - delhi - carriage - india | 6 | 995_nellore_railway_delhi_carriage |
| 996 | ramos - blasio - bratton - liu - wenjian | 6 | 996_ramos_blasio_bratton_liu |
| 997 | biofuel - biofuels - ethanol - biochar - gasoline | 6 | 997_biofuel_biofuels_ethanol_biochar |
| 998 | graphene - hydrogen - atom - nanodiamonds - membrane | 6 | 998_graphene_hydrogen_atom_nanodiamonds |
| 999 | rico - gang - puerto - homicide - crime | 6 | 999_rico_gang_puerto_homicide |
| 1000 | nigeria - jonathan - nigerias - election - nigerians | 6 | 1000_nigeria_jonathan_nigerias_election |
| 1001 | oxygen - membrane - rock - noffke - bacteria | 6 | 1001_oxygen_membrane_rock_noffke |
| 1002 | mouse - gavage - soulard - crueltyfree - foie | 6 | 1002_mouse_gavage_soulard_crueltyfree |
| 1003 | idol - lopez - kinane - finale - franco | 6 | 1003_idol_lopez_kinane_finale |
| 1004 | iraqi - almaliki - iraq - iraqs - alabadi | 6 | 1004_iraqi_almaliki_iraq_iraqs |
| 1005 | colwell - flag - australian - indigenous - racist | 6 | 1005_colwell_flag_australian_indigenous |
| 1006 | feonyx - kaydon - sullock - nappy - cot | 6 | 1006_feonyx_kaydon_sullock_nappy |
| 1007 | belhadj - gaddafi - straw - blair - rendition | 6 | 1007_belhadj_gaddafi_straw_blair |
| 1008 | leave - parental - childcare - schroeders - scatty | 6 | 1008_leave_parental_childcare_schroeders |
| 1009 | wormhole - photon - teleportation - relativity - warp | 6 | 1009_wormhole_photon_teleportation_relativity |
| 1010 | nelson - knight - pearson - fired - knights | 6 | 1010_nelson_knight_pearson_fired |
| 1011 | toilet - jaeduck - symonds - toiletshaped - warriors | 6 | 1011_toilet_jaeduck_symonds_toiletshaped |
| 1012 | calorie - sauce - calories - chopped - protein | 6 | 1012_calorie_sauce_calories_chopped |
| 1013 | arin - jorgensen - christine - gender - tiffany | 6 | 1013_arin_jorgensen_christine_gender |
| 1014 | lewinsky - clinton - clintons - monica - lewinskys | 6 | 1014_lewinsky_clinton_clintons_monica |
| 1015 | bundy - cliven - bundys - cattle - rancher | 6 | 1015_bundy_cliven_bundys_cattle |
| 1016 | trentadue - bombing - kaczynski - mcveigh - oklahoma | 6 | 1016_trentadue_bombing_kaczynski_mcveigh |
| 1017 | bell - malden - clarence - hamzahs - sennett | 6 | 1017_bell_malden_clarence_hamzahs |
| 1018 | valentines - cuddle - hess - romantic - cuddling | 6 | 1018_valentines_cuddle_hess_romantic |
| 1019 | hockey - eruzione - suter - selanne - finland | 6 | 1019_hockey_eruzione_suter_selanne |
| 1020 | maps - apple - apples - forstall - google | 6 | 1020_maps_apple_apples_forstall |
| 1021 | bonfire - lewes - effigy - fawkes - sussex | 6 | 1021_bonfire_lewes_effigy_fawkes |
| 1022 | cunningham - josie - boob - 4800 - wannabe | 6 | 1022_cunningham_josie_boob_4800 |
| 1023 | khloe - timeless - syms - kardashian - thickes | 6 | 1023_khloe_timeless_syms_kardashian |
| 1024 | malika - hinksman - daisyray - bath - geoff | 6 | 1024_malika_hinksman_daisyray_bath |
| 1025 | saffron - ruhleben - horticultural - garden - flower | 5 | 1025_saffron_ruhleben_horticultural_garden |
| 1026 | grant - tina - hong - hugh - landon | 5 | 1026_grant_tina_hong_hugh |
| 1027 | deeds - gus - creigh - virginia - millboro | 5 | 1027_deeds_gus_creigh_virginia |
| 1028 | miliband - naftali - syria - labour - bso | 5 | 1028_miliband_naftali_syria_labour |
| 1029 | palin - johnston - bristol - palins - king | 5 | 1029_palin_johnston_bristol_palins |
| 1030 | teen - pregnancy - birth - kearney - unplanned | 5 | 1030_teen_pregnancy_birth_kearney |
| 1031 | ocd - scrupulosity - abramowitz - hoarding - paperchase | 5 | 1031_ocd_scrupulosity_abramowitz_hoarding |
| 1032 | drinkdrive - grandparent - fatality - safer - rural | 5 | 1032_drinkdrive_grandparent_fatality_safer |
| 1033 | pirabahuran - shopkeeper - hennessy - robber - hennesy | 5 | 1033_pirabahuran_shopkeeper_hennessy_robber |
| 1034 | hair - shaunni - parsons - kiera - mackenzie | 5 | 1034_hair_shaunni_parsons_kiera |
| 1035 | plastic - ocean - plastiki - gyre - trash | 5 | 1035_plastic_ocean_plastiki_gyre |
| 1036 | nicholson - nielsen - connie - award - outstanding | 5 | 1036_nicholson_nielsen_connie_award |
| 1037 | google - search - ruling - engine - results | 5 | 1037_google_search_ruling_engine |
| 1038 | iraqi - troop - iraq - iraqs - almaliki | 5 | 1038_iraqi_troop_iraq_iraqs |
| 1039 | pozonsky - oswald - meth - fresno - walmart | 5 | 1039_pozonsky_oswald_meth_fresno |
| 1040 | watts - crib - kumpula - baby - kimpton | 5 | 1040_watts_crib_kumpula_baby |
| 1041 | amnesty - shetty - human - aceves - gualinga | 5 | 1041_amnesty_shetty_human_aceves |
| 1042 | naji - guantanamo - alshibh - aamer - aamers | 5 | 1042_naji_guantanamo_alshibh_aamer |
| 1043 | sanford - sanfords - jenny - carolina - governors | 5 | 1043_sanford_sanfords_jenny_carolina |
| 1044 | vallance - lowe - jesperson - chang - wiggins | 5 | 1044_vallance_lowe_jesperson_chang |
| 1045 | collapse - building - construction - collapsed - canacona | 5 | 1045_collapse_building_construction_collapsed |
| 1046 | gift - rachel - christmas - present - gates | 5 | 1046_gift_rachel_christmas_present |
| 1047 | tudor - henry - asprey - tudors - wolf | 5 | 1047_tudor_henry_asprey_tudors |
| 1048 | breastfeeding - formula - milk - breastfeed - breastfed | 5 | 1048_breastfeeding_formula_milk_breastfeed |
| 1049 | costume - headdress - macklemore - stereotype - halloween | 5 | 1049_costume_headdress_macklemore_stereotype |
| 1050 | iraq - isil - syria - strike - air | 5 | 1050_iraq_isil_syria_strike |
| 1051 | contraception - pill - sterilization - armor - gauchat | 5 | 1051_contraception_pill_sterilization_armor |
| 1052 | chao - gigi - eav - sean - cecil | 5 | 1052_chao_gigi_eav_sean |
| 1053 | exercise - activity - walking - brisk - heart | 5 | 1053_exercise_activity_walking_brisk |
| 1054 | hygiene - tesco - earwig - store - supermarket | 5 | 1054_hygiene_tesco_earwig_store |
| 1055 | shakespeare - shakespeares - rylance - folio - shylock | 5 | 1055_shakespeare_shakespeares_rylance_folio |
| 1056 | hightower - nace - demaio - zilge - dingess | 5 | 1056_hightower_nace_demaio_zilge |
| 1057 | dubai - mulla - calcutt - robert - uae | 5 | 1057_dubai_mulla_calcutt_robert |
| 1058 | wikileaks - assange - classified - gates - manning | 5 | 1058_wikileaks_assange_classified_gates |
| 1059 | dubai - burj - 971 - dubais - al | 5 | 1059_dubai_burj_971_dubais |
| 1060 | schall - cdu - germany - npd - german | 5 | 1060_schall_cdu_germany_npd |
| 1061 | allotment - garden - brana - plot - cummins | 5 | 1061_allotment_garden_brana_plot |
| 1062 | lopez - hood - fort - owens - muntean | 5 | 1062_lopez_hood_fort_owens |
| 1063 | hansen - conklin - hansens - kk - waikiki | 5 | 1063_hansen_conklin_hansens_kk |
| 1064 | splashlight - beauty - profile - poppy - colour | 5 | 1064_splashlight_beauty_profile_poppy |
| 1065 | ua - slough - thermal - outbuilding - council | 5 | 1065_ua_slough_thermal_outbuilding |
| 1066 | hribal - murrain - hribals - kalmbach - thomassey | 5 | 1066_hribal_murrain_hribals_kalmbach |
| 1067 | ktf - kermit - muppets - sesame - muppet | 5 | 1067_ktf_kermit_muppets_sesame |
| 1068 | beamond - haslemere - halliwell - goddenedwards - beamonds | 5 | 1068_beamond_haslemere_halliwell_goddenedwards |
| 1069 | tinder - dating - rad - ignighter - antidate | 5 | 1069_tinder_dating_rad_ignighter |
| 1070 | robot - robots - autonomous - 1939 - artificial | 5 | 1070_robot_robots_autonomous_1939 |
| 1071 | markoff - brisman - conley - markoffs - julissa | 5 | 1071_markoff_brisman_conley_markoffs |
| 1072 | port - almasry - cairo - alahly - egypts | 5 | 1072_port_almasry_cairo_alahly |
| 1073 | casquejo - tower - skyscraper - basher - wtc | 5 | 1073_casquejo_tower_skyscraper_basher |
| 1074 | lodger - grownup - average - class - rent | 5 | 1074_lodger_grownup_average_class |
| 1075 | palace - royal - buckingham - queen - savoir | 5 | 1075_palace_royal_buckingham_queen |
| 1076 | tokyo - shibuya - marunouchi - akihabara - nakagin | 5 | 1076_tokyo_shibuya_marunouchi_akihabara |
| 1077 | food - trussell - bank - voucher - rayner | 5 | 1077_food_trussell_bank_voucher |
| 1078 | berg - bush - jenna - wilson - library | 5 | 1078_berg_bush_jenna_wilson |
| 1079 | hmrc - owe - accountants - chartered - tax | 5 | 1079_hmrc_owe_accountants_chartered |
| 1080 | disability - benefit - dla - claimant - allowance | 5 | 1080_disability_benefit_dla_claimant |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
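For reference, a minimal sketch of how these hyperparameters map onto the standard `BERTopic` constructor. The embedding model and the UMAP/HDBSCAN sub-models are not recorded in this card, so the defaults used below are assumptions rather than the exact training setup:

```python
from bertopic import BERTopic

# Re-create a model with the hyperparameters listed above; the sub-models
# (embeddings, UMAP, HDBSCAN) fall back to BERTopic's defaults here.
topic_model = BERTopic(
    calculate_probabilities=False,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)
```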
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.10.12
|
lsaulier/q-Taxi-v3 | lsaulier | "2022-12-18T15:19:32Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-18T15:16:00Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook, not a gym built-in
model = load_from_hub(repo_id="lsaulier/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
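To actually act with the loaded Q-table, a short greedy-rollout sketch follows. It assumes the pickled dict stores the table under a `"qtable"` key (as in the course notebook) and uses the classic `reset()`/`step()` API; newer gym/gymnasium versions return additional values.

```python
import numpy as np

# Continuing from the snippet above: greedy rollout with the loaded Q-table.
# Assumes the pickled dict stores it under "qtable" (as in the course notebook).
state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)  # older gym API; gymnasium returns 5 values
    total_reward += reward
print(f"Episode return: {total_reward}")
```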
|
mlifecrisis/dais | mlifecrisis | "2025-01-03T18:55:56Z" | 32 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-03T18:13:01Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: dais
---
# Dais
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `dais` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mlifecrisis/dais', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ahenkel/opt-6.7b-lora | ahenkel | "2024-04-06T18:19:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-06T18:19:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
allydennisse/my_awesome_qa_model | allydennisse | "2024-06-16T13:42:10Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-06-15T21:58:57Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: allydennisse/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# allydennisse/my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8973
- Validation Loss: 1.7602
- Epoch: 1
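A minimal usage sketch (an assumption based on the standard extractive-QA pipeline, not part of the original card):

```python
from transformers import pipeline

# The checkpoint was trained with Keras, so request the TensorFlow weights explicitly.
qa = pipeline("question-answering", model="allydennisse/my_awesome_qa_model", framework="tf")
print(qa(question="What was fine-tuned?", context="A DistilBERT model was fine-tuned for question answering."))
```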
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5013 | 2.1723 | 0 |
| 1.8973 | 1.7602 | 1 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
EarthnDusk/Poltergeist-Illustration | EarthnDusk | "2023-06-19T06:47:45Z" | 28 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable diffusion",
"anime",
"comic book",
"mix",
"merge",
"text-to-image",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:Nerfgun3/bad_prompt",
"dataset:Duskfallcrew/remydataset",
"dataset:basesssssp/bad-kemono-negative-embedding",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-19T00:44:23Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- stable diffusion
- diffusers
- anime
- comic book
- mix
- merge
datasets:
- fka/awesome-chatgpt-prompts
- Nerfgun3/bad_prompt
- Duskfallcrew/remydataset
- basesssssp/bad-kemono-negative-embedding
pipeline_tag: text-to-image
---
## POODA-BEEP!
This is censored language for POLTERBITCH aka Poltergeist.
It's an in-house nod to some of our alters' truths, and it's kind of a joke for Beetlejuice fans.
THIS IS AN ILLUSTRATION - COMIC MIX, and there are several versions of it. You'll note that there is only ONE DEMO SPACE FOR IT ON HF so far - but give us time, we're working on it!
---
## We are partly sponsored by Pirate diffusion, and are waiting on links and images.
REQUESTS FOR LORAS AND MERGES: https://forms.gle/tAgb8RsC8mf1scV48
---
## HOW TO SUPPORT US:
Join our Reddit: https://www.reddit.com/r/earthndusk/
If you have requests or concerns, or want to be a beta tester (we're still looking for them), JOIN THE DISCORD AND DEMAND THINGS OF US: https://discord.gg/5t2kYxt7An
Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
We stream a lot of our testing on twitch: https://www.twitch.tv/duskfallcrew
any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew
If SOMEHOW CIVIT ISN'T WORKING WE WILL ALWAYS HAVE A BACKUP: https://huggingface.co/Duskfallcrew/
Submit SFW and amazing wallpapers at PaprGG: https://discord.gg/2UXkGwndVE
---
## STOP
We used AFTER DETAILER AND HI RES FIX.
Upscaler Choices:
https://huggingface.co/uwg/upscaler/tree/main/ESRGAN
Vae Alternatives:
https://huggingface.co/datasets/VASVASVAS/vae
---
## MIX BREAK DOWN
As far as I can tell, this is literally just Pooda-Beep and Epic Mix v4 with LoRA binding.
Because it's a comic book style model, it likely has MANY of Lykon's comic loras in it.
Marvels and Dungeons
Largely lost the OG list for it...
https://civitai.com/models/71404/mcbs-machinecodes-comic-book-style
plus more iComix
and Duel Comic Strike
---
## LEGAL RESPONSIBILITY DOWNSTREAM
You are legally responsible for YOUR use of this model and its downstream uses. We highly request you don't do anything illegal, morally incorrect, or anything that goes against the CreativeML Open RAIL-M terms.
You are FREE to add this to GENERATION sites as long as you link back to the CivitAI page here: https://civitai.com/models/27096/epic-mix-v4
You are WITHIN reason allowed to make commercial-use images, but as always check your local laws, and of course - do not use this to create works that claim they are NOT AI-generated media. If you're working on a larger project, please consider supporting us financially. |
vertings6/d0eb654e-7112-45be-b508-69fcd96039b2 | vertings6 | "2025-01-22T22:25:01Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b",
"base_model:adapter:unsloth/codegemma-7b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T21:34:58Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0eb654e-7112-45be-b508-69fcd96039b2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c8ef13cf21c962ee_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c8ef13cf21c962ee_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vertings6/d0eb654e-7112-45be-b508-69fcd96039b2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/c8ef13cf21c962ee_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6af1c2d7-d826-43c1-a963-5c6a4303f07b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6af1c2d7-d826-43c1-a963-5c6a4303f07b
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# d0eb654e-7112-45be-b508-69fcd96039b2
This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9157
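
As a quick orientation (not part of the auto-generated card), here is a minimal sketch for loading this LoRA adapter on top of its base model with PEFT; the prompt and generation settings are illustrative only:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/codegemma-7b"
adapter_id = "vertings6/d0eb654e-7112-45be-b508-69fcd96039b2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# device_map="auto" requires the accelerate package.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```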
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 4.1272 |
| 3.8704 | 0.0017 | 5 | 4.0203 |
| 3.8551 | 0.0034 | 10 | 3.9657 |
| 3.8742 | 0.0051 | 15 | 3.9560 |
| 3.8867 | 0.0068 | 20 | 3.9539 |
| 3.8972 | 0.0085 | 25 | 3.9271 |
| 4.0733 | 0.0101 | 30 | 3.9157 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Amaudel/amaudel | Amaudel | "2025-02-20T16:25:02Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-02-20T15:41:32Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
m-a-p/MusiLingo-short-v1 | m-a-p | "2024-09-13T07:29:21Z" | 18 | 2 | transformers | [
"transformers",
"safetensors",
"musilingo",
"feature-extraction",
"music",
"art",
"custom_code",
"en",
"arxiv:2309.08730",
"license:cc-by-4.0",
"region:us"
] | feature-extraction | "2024-04-04T10:16:03Z" | ---
language:
- en
license: cc-by-4.0
tags:
- music
- art
---
# Model Card for Model ID
## Model Details
### Model Description
The model consists of a music encoder ```MERT-v1-330M```, a natural language decoder ```vicuna-7b-delta-v0```, and a linear projection layer between the two.
This checkpoint of MusiLingo was developed on MusicInstruct (MI)-short and can answer short instructions about raw music audio, such as queries about tempo, emotion, genre, or tag information. You can use the [MI](https://huggingface.co/datasets/m-a-p/Music-Instruct) dataset for the following demo.
### Model Sources [optional]
- **Repository:** [GitHub repo](https://github.com/zihaod/MusiLingo)
- **Paper [optional]:** __[MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response](https://arxiv.org/abs/2309.08730)__
<!-- - **Demo [optional]:** [More Information Needed] -->
## Getting Started
```python
from tqdm.auto import tqdm
import torch
from torch.utils.data import DataLoader
from transformers import AutoModel, Wav2Vec2FeatureExtractor
from transformers import StoppingCriteria, StoppingCriteriaList
class StoppingCriteriaSub(StoppingCriteria):
def __init__(self, stops=[], encounters=1):
super().__init__()
self.stops = stops
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
for stop in self.stops:
if torch.all((stop == input_ids[0][-len(stop):])).item():
return True
return False
def get_musilingo_pred(model, text, audio_path, stopping, length_penalty=1, temperature=0.1,
max_new_tokens=300, num_beams=1, min_length=1, top_p=0.5, repetition_penalty=1.0):
# see https://huggingface.co/m-a-p/MusiLingo-musicqa-v1 for load_audio function definition
audio = load_audio(audio_path, target_sr=24000,
is_mono=True,
is_normalize=False,
crop_to_length_in_sample_points=int(30*16000)+1,
crop_randomly=True,
pad=False).cuda()
processor = Wav2Vec2FeatureExtractor.from_pretrained("m-a-p/MERT-v1-330M",trust_remote_code=True)
audio = processor(audio,
sampling_rate=24000,
return_tensors="pt")['input_values'][0].cuda()
audio_embeds, atts_audio = model.encode_audio(audio)
prompt = '<Audio><AudioHere></Audio> ' + text
instruction_prompt = [model.prompt_template.format(prompt)]
audio_embeds, atts_audio = model.instruction_prompt_wrap(audio_embeds, atts_audio, instruction_prompt)
model.llama_tokenizer.padding_side = "right"
batch_size = audio_embeds.shape[0]
bos = torch.ones([batch_size, 1],
dtype=torch.long,
device=torch.device('cuda')) * model.llama_tokenizer.bos_token_id
bos_embeds = model.llama_model.model.embed_tokens(bos)
# atts_bos = atts_audio[:, :1]
inputs_embeds = torch.cat([bos_embeds, audio_embeds], dim=1)
# attention_mask = torch.cat([atts_bos, atts_audio], dim=1)
outputs = model.llama_model.generate(
inputs_embeds=inputs_embeds,
max_new_tokens=max_new_tokens,
stopping_criteria=stopping,
num_beams=num_beams,
do_sample=True,
min_length=min_length,
top_p=top_p,
repetition_penalty=repetition_penalty,
length_penalty=length_penalty,
temperature=temperature,
)
output_token = outputs[0]
if output_token[0] == 0: # the model might output a unknow token <unk> at the beginning. remove it
output_token = output_token[1:]
if output_token[0] == 1: # if there is a start token <s> at the beginning. remove it
output_token = output_token[1:]
output_text = model.llama_tokenizer.decode(output_token, add_special_tokens=False)
output_text = output_text.split('###')[0] # remove the stop sign '###'
output_text = output_text.split('Assistant:')[-1].strip()
return output_text
musilingo = AutoModel.from_pretrained("m-a-p/MusiLingo-short-v1", trust_remote_code=True)
musilingo.to("cuda")
musilingo.eval()
prompt = "this is the task instruction and input question for MusiLingo model"
audio_path = "/path/to/the/audio"
stopping = StoppingCriteriaList([StoppingCriteriaSub([torch.tensor([835]).cuda(),
torch.tensor([2277, 29937]).cuda()])])
response = get_musilingo_pred(musilingo.model, prompt, audio_path, stopping, length_penalty=100, temperature=0.1)
```
# Citing This Work
If you find the work useful for your research, please consider citing it using the following BibTeX entry:
```
@inproceedings{deng2024musilingo,
title={MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response},
author={Deng, Zihao and Ma, Yinghao and Liu, Yudong and Guo, Rongchen and Zhang, Ge and Chen, Wenhu and Huang, Wenhao and Benetos, Emmanouil},
booktitle={Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024)},
year={2024},
organization={Association for Computational Linguistics}
}
``` |
GlastonburyGroup/UKBBLatent_Cardiac_20208_DiffAE3D_L128_S2023 | GlastonburyGroup | "2024-11-06T19:47:28Z" | 8 | 0 | pytorch | [
"pytorch",
"safetensors",
"DiffAE",
"medical",
"cardiac MRI",
"MRI",
"CINE",
"dynamic MRI",
"representation learning",
"unsupervised learning",
"3D",
"diffusion",
"diffusion autoencoder",
"autoencoder",
"3D DiffAE",
"UK Biobank",
"latent space",
"image-feature-extraction",
"custom_code",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | "2024-09-19T16:16:34Z" | ---
license: apache-2.0
pipeline_tag: image-feature-extraction
tags:
- medical
- cardiac MRI
- MRI
- CINE
- dynamic MRI
- representation learning
- unsupervised learning
- 3D
- diffusion
- diffusion autoencoder
- autoencoder
- DiffAE
- 3D DiffAE
- UK Biobank
- latent space
library_name: pytorch
---
# UKBBLatent_Cardiac_20208_DiffAE3D_L128_S2023
Biobank-scale imaging provides a unique opportunity to characterise structural and functional cardiac phenotypes and how they relate to disease outcomes. However, deriving specific phenotypes from MRI data requires time-consuming expert annotation, limiting scalability and does not exploit how information dense such image acquisitions are. In this study, we applied a 3D diffusion autoencoder to temporally resolved cardiac MRI data from 71,021 UK Biobank participants to derive latent phenotypes representing the human heart in motion. These phenotypes were reproducible, heritable (h2 = [4 - 18%]), and significantly associated with cardiometabolic traits and outcomes, including atrial fibrillation (P = 8.5 × 10-29) and myocardial infarction (P = 3.7 × 10-12). By using latent space manipulation techniques, we directly interpreted and visualised what specific latent phenotypes were capturing in a given MRI.
## Model Details
During this research, the original [DiffAE](https://diff-ae.github.io/) model was adapted and extended to 3D to create the 3D DiffAE model, which was trained on the CINE Cardiac Long-axis 4-chamber view MRIs from the UK Biobank dataset using 5 different seeds. This model can be used to infer latent representations from similar cardiac MRIs, or as a pretrained model to be fine-tuned on other datasets or tasks.
This model can also be used to generate synthetic cardiac MRIs similar to the training set.
### Model Description
- **Model type:** 3D DiffAE
- **Task:** Obtaining latent representation from 3D input volumes
- **Training dataset:** [CINE Cardiac Long-axis 4-chamber view MRIs from UK Biobank](https://biobank.ctsu.ox.ac.uk/crystal/field.cgi?id=20208)
- **Training seed:** 2023
- **Input:** 3D MRI (2D over time), intensity normalised (min-max, followed by z-score with 0.5 mean and std)
- **Output:** 128 latent factors. Can also be used for generating synthetic MRIs.
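
For orientation, below is a minimal sketch of the intensity normalisation described in the input field above (min-max scaling followed by a shift/scale towards mean 0.5 and std 0.5). The function name and exact formulation are our assumptions for illustration; check the ImLatent repository for the preprocessing actually used during training.

```python
import numpy as np

def normalise_intensity(volume: np.ndarray) -> np.ndarray:
    """Illustrative sketch: min-max scale to [0, 1], then shift/scale to mean 0.5 and std 0.5."""
    v = volume.astype(np.float32)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)  # min-max to [0, 1]
    v = (v - v.mean()) / (v.std() + 1e-8)           # z-score
    return 0.5 * v + 0.5                            # rescale to mean 0.5, std 0.5
```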
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/GlastonburyGroup/ImLatent
- **Project page:** https://glastonburygroup.github.io/CardiacDiffAE_GWAS/
- **Preprint:** https://doi.org/10.1101/2024.11.04.24316700
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this model in your research, or utilise code from this repository or the provided weights, please consider citing the following in your publications:
**BibTeX:**
```bibtex
@article{Ometto2024.11.04.24316700,
author = {Ometto, Sara and Chatterjee, Soumick and Vergani, Andrea Mario and Landini, Arianna and Sharapov, Sodbo and Giacopuzzi, Edoardo and Visconti, Alessia and Bianchi, Emanuele and Santonastaso, Federica and Soda, Emanuel M and Cisternino, Francesco and Ieva, Francesca and Di Angelantonio, Emanuele and Pirastu, Nicola and Glastonbury, Craig A},
title = {Unsupervised cardiac MRI phenotyping with 3D diffusion autoencoders reveals novel genetic insights},
elocation-id = {2024.11.04.24316700},
year = {2024},
doi = {10.1101/2024.11.04.24316700},
publisher = {Cold Spring Harbor Laboratory Press},
url = {https://www.medrxiv.org/content/early/2024/11/05/2024.11.04.24316700},
journal = {medRxiv}
}
```
**APA:**
Ometto, S., Chatterjee, S., Vergani, A. M., Landini, A., Sharapov, S., Giacopuzzi, E., … Glastonbury, C. A. (2024). Unsupervised cardiac MRI phenotyping with 3D diffusion autoencoders reveals novel genetic insights. medRxiv. doi:10.1101/2024.11.04.24316700 |
mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF | mradermacher | "2024-12-21T06:00:36Z" | 22 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Kabster/BioMistral-Zephyr-Beta-SLERP",
"base_model:quantized:Kabster/BioMistral-Zephyr-Beta-SLERP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-19T19:32:44Z" | ---
base_model: Kabster/BioMistral-Zephyr-Beta-SLERP
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kabster/BioMistral-Zephyr-Beta-SLERP
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Zephyr-Beta-SLERP-GGUF/resolve/main/BioMistral-Zephyr-Beta-SLERP.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
firefistape/ppo-LunarLander-v2 | firefistape | "2023-05-22T07:54:38Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-22T07:54:18Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.80 +/- 25.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed to follow the default huggingface_sb3 naming convention.
checkpoint = load_from_hub("firefistape/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jonatasgrosman/exp_w2v2t_it_vp-nl_s222 | jonatasgrosman | "2022-07-08T20:54:30Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-08T20:54:03Z" | ---
language:
- it
license: apache-2.0
tags:
- automatic-speech-recognition
- it
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_it_vp-nl_s222
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
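A minimal transcription sketch with plain `transformers` (this assumes the checkpoint ships the usual Wav2Vec2 processor/tokenizer files; the audio path is a placeholder, and `librosa` is used here only to resample the input to 16 kHz):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jonatasgrosman/exp_w2v2t_it_vp-nl_s222"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("audio.wav", sr=16_000)  # placeholder path; 16 kHz input is required
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```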
|
oldiday/5915cb8a-30c2-46f4-9e7a-6fc2c678ddf4 | oldiday | "2025-02-10T03:15:27Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"region:us"
] | null | "2025-02-09T23:02:11Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5915cb8a-30c2-46f4-9e7a-6fc2c678ddf4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 27ff9d8e7226ff01_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/27ff9d8e7226ff01_train_data.json
type:
field_input: query
field_instruction: original_question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: oldiday/5915cb8a-30c2-46f4-9e7a-6fc2c678ddf4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 8
mlflow_experiment_name: /tmp/27ff9d8e7226ff01_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d277a556-951f-4fb2-abec-72f59a682f11
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: d277a556-951f-4fb2-abec-72f59a682f11
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5915cb8a-30c2-46f4-9e7a-6fc2c678ddf4
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8820 |
| 0.2532 | 0.0085 | 100 | 0.2473 |
| 0.2207 | 0.0171 | 200 | 0.2249 |
| 0.2161 | 0.0256 | 300 | 0.2136 |
| 0.2104 | 0.0342 | 400 | 0.2040 |
| 0.2067 | 0.0427 | 500 | 0.1992 |
| 0.2228 | 0.0512 | 600 | 0.1980 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
yurongzhong/LLMBook_distilbert-base-uncased-finetuned-emotion | yurongzhong | "2025-01-01T10:43:57Z" | 116 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:dair-ai/emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-01T09:29:03Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: LLMBook_distilbert-base-uncased-finetuned-emotion
results: []
datasets:
- dair-ai/emotion
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLMBook_distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Emotion Dataset](https://github.com/dair-ai/emotion_dataset).
It achieves the following results on the evaluation set:
- Loss: 0.2171
- Accuracy: 0.928
- F1: 0.9279
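
As a quick usage illustration (not part of the original card), the fine-tuned checkpoint can be queried with the standard `text-classification` pipeline; the example sentence is arbitrary, and labels may appear as `LABEL_0`–`LABEL_5` if `id2label` was not set during training:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yurongzhong/LLMBook_distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
```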
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8356 | 1.0 | 250 | 0.3062 | 0.9115 | 0.9104 |
| 0.2527 | 2.0 | 500 | 0.2171 | 0.928 | 0.9279 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.3.0+cpu
- Datasets 3.2.0
- Tokenizers 0.20.3 |
Augusto777/vit-base-patch16-224-ve-U12-b-24 | Augusto777 | "2024-06-12T04:39:32Z" | 195 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-12T04:31:48Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U12-b-24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8478260869565217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U12-b-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6456
- Accuracy: 0.8478
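
A hedged inference sketch (the label set depends on the `imagefolder` dataset this checkpoint was trained on; the image path below is a placeholder):

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-ve-U12-b-24",
)
image = Image.open("example.jpg")  # placeholder path
print(classifier(image, top_k=3))
```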
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3806 | 0.4130 |
| 1.379 | 2.0 | 13 | 1.3103 | 0.5435 |
| 1.379 | 2.92 | 19 | 1.2269 | 0.4130 |
| 1.2758 | 4.0 | 26 | 1.1412 | 0.4565 |
| 1.121 | 4.92 | 32 | 1.0650 | 0.4783 |
| 1.121 | 6.0 | 39 | 1.0084 | 0.5217 |
| 0.9871 | 6.92 | 45 | 0.9395 | 0.6522 |
| 0.8612 | 8.0 | 52 | 0.8798 | 0.7174 |
| 0.8612 | 8.92 | 58 | 0.8219 | 0.7391 |
| 0.7653 | 10.0 | 65 | 0.7712 | 0.7826 |
| 0.6674 | 10.92 | 71 | 0.7328 | 0.7609 |
| 0.6674 | 12.0 | 78 | 0.6968 | 0.7391 |
| 0.568 | 12.92 | 84 | 0.6456 | 0.8478 |
| 0.4723 | 14.0 | 91 | 0.6528 | 0.8043 |
| 0.4723 | 14.92 | 97 | 0.7107 | 0.6739 |
| 0.4256 | 16.0 | 104 | 0.6335 | 0.7609 |
| 0.3524 | 16.92 | 110 | 0.5953 | 0.8261 |
| 0.3524 | 18.0 | 117 | 0.5824 | 0.8261 |
| 0.3282 | 18.92 | 123 | 0.6329 | 0.7174 |
| 0.3074 | 20.0 | 130 | 0.5775 | 0.8043 |
| 0.3074 | 20.92 | 136 | 0.5770 | 0.8043 |
| 0.3076 | 22.0 | 143 | 0.5749 | 0.8261 |
| 0.3076 | 22.15 | 144 | 0.5747 | 0.8261 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Karen-Teng/my-awesome-text-classification | Karen-Teng | "2023-12-08T07:30:59Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-08T07:30:34Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my-awesome-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3965
- Accuracy: 0.9487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 469 | 2.7427 | 0.7947 |
| 3.9521 | 2.0 | 938 | 1.2087 | 0.8933 |
| 1.8352 | 3.0 | 1407 | 0.6336 | 0.934 |
| 0.8168 | 4.0 | 1876 | 0.4447 | 0.9457 |
| 0.4589 | 5.0 | 2345 | 0.3965 | 0.9487 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer-target-glue-qqp | muhtasham | "2023-01-13T00:49:20Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-01-13T00:18:52Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-glue-cola-from-scratch-custom-tokenizer-target-glue-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-cola-from-scratch-custom-tokenizer-target-glue-qqp
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer](https://huggingface.co/muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5427
- Accuracy: 0.7318
- F1: 0.5719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6414 | 0.04 | 500 | 0.6217 | 0.6544 | 0.4756 |
| 0.6157 | 0.09 | 1000 | 0.6047 | 0.6738 | 0.5091 |
| 0.6019 | 0.13 | 1500 | 0.5899 | 0.6825 | 0.5814 |
| 0.5812 | 0.18 | 2000 | 0.5686 | 0.7068 | 0.5646 |
| 0.572 | 0.22 | 2500 | 0.5590 | 0.7170 | 0.5716 |
| 0.5641 | 0.26 | 3000 | 0.5543 | 0.7194 | 0.5891 |
| 0.5572 | 0.31 | 3500 | 0.5491 | 0.7252 | 0.5919 |
| 0.5529 | 0.35 | 4000 | 0.5529 | 0.7199 | 0.6169 |
| 0.556 | 0.4 | 4500 | 0.5471 | 0.7260 | 0.6145 |
| 0.5482 | 0.44 | 5000 | 0.5427 | 0.7318 | 0.5719 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
HueyNemud/das22-10-camembert_pretrained | HueyNemud | "2022-05-19T12:05:12Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
tags:
- generated_from_trainer
model-index:
- name: CamemBERT pretrained on french trade directories from the XIXth century
results: []
---
# CamemBERT pretrained on french trade directories from the XIXth century
This model is part of the material of the paper
> Abadie, N., Carlinet, E., Chazalon, J., Duménieu, B. (2022). A
> Benchmark of Named Entity Recognition Approaches in Historical
> Documents Application to 19𝑡ℎ Century French Directories. In: Uchida,
> S., Barney, E., Eglin, V. (eds) Document Analysis Systems. DAS 2022.
> Lecture Notes in Computer Science, vol 13237. Springer, Cham.
> https://doi.org/10.1007/978-3-031-06555-2_30
The source code to train this model is available on the [GitHub repository](https://github.com/soduco/paper-ner-bench-das22) of the paper as a Jupyter notebook in `src/ner/10-camembert_pretraining.ipynb`.
## Model description
This model further pre-trains [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a set of ~845k entries from Paris trade directories of the XIXth century, extracted with OCR.
Trade directory entries are short and strongly structured texts giving the name, activity and location of a person or business, e.g.:
```
Peynaud, R. de la Vieille Bouclerie, 18. Richard, Joullain et comp., (commission- —Phéâtre Français. naire, (entrepôt), au port de la Rapée-
```
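As an illustration (not part of the original paper material), the pretrained checkpoint can be probed with the standard `fill-mask` pipeline on a directory-style entry; the example entry below is made up:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HueyNemud/das22-10-camembert_pretrained")
# CamemBERT uses "<mask>" as its mask token.
print(fill_mask("Dupont, <mask>, rue de la Paix, 12."))
```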
## Intended uses & limitations
This model is intended for reproducibility of the NER evaluation published in the DAS2022 paper.
Several derived models trained for NER on trade directories are available on HuggingFace, each trained on a different dataset :
- [das22-10-camembert_pretrained_finetuned_ref](): trained for NER on ~6000 directory entries manually corrected.
- [das22-10-camembert_pretrained_finetuned_pero](): trained for NER on ~6000 directory entries extracted with PERO-OCR.
- [das22-10-camembert_pretrained_finetuned_tess](): trained for NER on ~6000 directory entries extracted with Tesseract.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.9603 | 1.0 | 100346 | 1.8005 |
| 1.7032 | 2.0 | 200692 | 1.6460 |
| 1.5879 | 3.0 | 301038 | 1.5570 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vidore/colqwen2.5-v0.1 | vidore | "2025-02-21T09:50:28Z" | 881 | 0 | colpali | [
"colpali",
"safetensors",
"vidore",
"vidore-experimental",
"visual-document-retrieval",
"en",
"arxiv:2004.12832",
"arxiv:2407.01449",
"arxiv:2106.09685",
"base_model:vidore/colqwen2.5-base",
"base_model:finetune:vidore/colqwen2.5-base",
"license:mit",
"region:us"
] | visual-document-retrieval | "2025-01-30T09:39:30Z" | ---
license: mit
library_name: colpali
base_model: vidore/colqwen2.5-base
language:
- en
tags:
- colpali
- vidore
- vidore-experimental
pipeline_tag: visual-document-retrieval
---
# ColQwen2.5: Visual Retriever based on Qwen2.5-VL-3B-Instruct with ColBERT strategy
ColQwen is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2.5-VL-3B](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)- style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali)
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
## Version specificity
This model takes dynamic image resolutions as input and does not resize them, avoiding the aspect-ratio change applied in ColPali.
Maximal resolution is set so that 768 image patches are created at most. Experiments show clear improvements with larger amounts of image patches, at the cost of memory requirements.
This version is trained with `colpali-engine==0.3.7`.
Data is the same as the ColPali data described in the paper.
## Model Training
### Dataset
Our training dataset of 127,460 query-page pairs is comprised of train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify no multi-page PDF document is used both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.
*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*
### Parameters
All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8 GPU setup with data parallelism, a learning rate of 5e-5 with linear decay with 2.5% warmup steps, and a batch size of 32.
## Usage
Make sure `colpali-engine` is installed from source or with a version superior to 0.3.1.
`transformers` version must be > 4.45.0.
```bash
pip install git+https://github.com/illuin-tech/colpali
```
```python
import torch
from PIL import Image
from transformers.utils.import_utils import is_flash_attn_2_available
from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor
model = ColQwen2_5.from_pretrained(
"vidore/colqwen2.5-v0.1",
torch_dtype=torch.bfloat16,
device_map="cuda:0", # or "mps" if on Apple Silicon
attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()
processor = ColQwen2_5_Processor.from_pretrained("vidore/colqwen2.5-v0.1")
# Your inputs
images = [
Image.new("RGB", (32, 32), color="white"),
Image.new("RGB", (16, 16), color="black"),
]
queries = [
"Is attention really all you need?",
"What is the amount of bananas farmed in Salvador?",
]
# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**batch_images)
query_embeddings = model(**batch_queries)
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
## Limitations
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
## License
ColQwen2.5's vision language backbone model (Qwen2.5-VL) is under `Qwen RESEARCH LICENSE AGREEMENT` license. The adapters attached to the model are under MIT license.
## Contact
- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation
If you use any datasets or models from this organization in your research, please cite the original dataset as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
```
|
danieliuspodb/llama-3.2-extremist2-Q4_K_M-GGUF | danieliuspodb | "2025-02-17T19:50:29Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:danieliuspodb/llama-3.2-extremist2",
"base_model:quantized:danieliuspodb/llama-3.2-extremist2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-17T19:50:22Z" | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: danieliuspodb/llama-3.2-extremist2
---
# danieliuspodb/llama-3.2-extremist2-Q4_K_M-GGUF
This model was converted to GGUF format from [`danieliuspodb/llama-3.2-extremist2`](https://huggingface.co/danieliuspodb/llama-3.2-extremist2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/danieliuspodb/llama-3.2-extremist2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo danieliuspodb/llama-3.2-extremist2-Q4_K_M-GGUF --hf-file llama-3.2-extremist2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo danieliuspodb/llama-3.2-extremist2-Q4_K_M-GGUF --hf-file llama-3.2-extremist2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo danieliuspodb/llama-3.2-extremist2-Q4_K_M-GGUF --hf-file llama-3.2-extremist2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo danieliuspodb/llama-3.2-extremist2-Q4_K_M-GGUF --hf-file llama-3.2-extremist2-q4_k_m.gguf -c 2048
```
|
M9and2M/marone_wolof_wav2vec2-xls-r-300m | M9and2M | "2024-07-26T14:09:41Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"wo",
"dataset:M9and2M/Wolof_ASR_dataset",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-15T01:10:35Z" | ---
license: mit
datasets:
- M9and2M/Wolof_ASR_dataset
language:
- wo
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# Wolof ASR Model (Based on Whisper-Small)
## Model Overview
This repository hosts an Automatic Speech Recognition (ASR) model for the Wolof language, fine-tuned from a Wav2Vec 2.0 model. This model aims to provide accurate transcription of Wolof audio data.
## Model Details
- **Model Base**: [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
- **Loss**: 0.1604
- **WER**: 0.24
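
A minimal inference sketch (an assumption of ours, using the standard `transformers` ASR pipeline; the audio path is a placeholder, and the audio is resampled to the 16 kHz rate expected by the XLS-R backbone):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="M9and2M/marone_wolof_wav2vec2-xls-r-300m",
)
print(asr("wolof_example.wav"))  # placeholder path
```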
## Dataset
The dataset used for training and evaluating this model is a collection from various sources, ensuring a rich and diverse set of Wolof audio samples. The collection, available in my Hugging Face account, is used here keeping only audio clips shorter than 6 seconds.
- **Training Dataset**: 57 hours
- **Test Dataset**: 10 hours
For detailed information about the dataset, please refer to the [M9and2M/Wolof_ASR_dataset](https://huggingface.co/datasets/M9and2M/Wolof_ASR_dataset).
## Training
The training process was adapted from the code in [Finetune Wav2vec 2.0 For Speech Recognition](https://github.com/khanld/ASR-Wa2vec-Finetune), written to fine-tune Wav2Vec 2.0 for speech recognition. Special thanks to the author, Duy Khanh Le, for providing a robust and flexible training framework.
The model was trained with the following configuration:
- **Seed**: 19
- **Training Batch Size**: 4
- **Gradient Accumulation Steps**: 8
- **Number of GPUs**: 2
### Optimizer : AdamW
- **Learning Rate**: 1e-6
### Scheduler: OneCycleLR
- **Max Learning Rate**: 5e-5
## Acknowledgements
This model was built using Facebook's [Wav2Vec2.0](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) architecture and fine-tuned with a dataset collected from various sources. Special thanks to the creators and contributors of the dataset.
<!-- ## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<!-- **BibTeX:** -->
<!-- [More Information Needed] -->
<!-- **APA:** -->
## More Information
This model has been developed in the context of my Master Thesis at ETSIT-UPM, Madrid under the supervision of Prof. Luis A. Hernández Gómez.
## Contact
For any inquiries or questions, please contact [email protected] |
tomaszki/nous-thirty-eight | tomaszki | "2024-03-01T15:27:47Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-01T15:27:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PrunaAI/teknium-OpenHermes-2-Mistral-7B-bnb-4bit-smashed | PrunaAI | "2024-08-02T15:44:05Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-03T08:48:46Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo teknium/OpenHermes-2-Mistral-7B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/teknium-OpenHermes-2-Mistral-7B-bnb-4bit-smashed",
trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2-Mistral-7B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model teknium/OpenHermes-2-Mistral-7B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
siliconhealth/radio-aia-ner-byt5 | siliconhealth | "2025-01-02T10:34:16Z" | 112 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-01-02T10:31:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chchen/Llama-3.1-8B-Instruct-KTO-600 | chchen | "2025-01-16T18:44:54Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"trl",
"kto",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | "2025-01-16T17:40:01Z" | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- llama-factory
- lora
- trl
- kto
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-KTO-600
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-KTO-600
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the bct_non_cot_kto_600 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2017
- Rewards/chosen: 0.0412
- Logps/chosen: -18.3761
- Logits/chosen: -2496719.4921
- Rewards/rejected: -6.6216
- Logps/rejected: -86.0225
- Logits/rejected: -7772195.3684
- Rewards/margins: 6.6628
- Kl: 0.0
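This repository holds a PEFT LoRA adapter rather than full model weights (see the framework versions below), so it has to be loaded on top of the base model. A minimal, untested sketch; the repository IDs come from this card, everything else is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the KTO-trained LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(base, "chchen/Llama-3.1-8B-Instruct-KTO-600")
```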
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Logps/chosen | Logits/chosen | Rewards/rejected | Logps/rejected | Logits/rejected | Rewards/margins | Kl |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:------------:|:-------------:|:----------------:|:--------------:|:---------------:|:---------------:|:------:|
| 0.4993 | 0.7407 | 50 | 0.4998 | 0.0053 | -18.7353 | -4714148.5714 | 0.0043 | -19.7637 | -7797900.9123 | 0.0010 | 2.5227 |
| 0.4763 | 1.4815 | 100 | 0.4762 | 0.1369 | -17.4186 | -4475767.3651 | -0.0490 | -20.2970 | -7752298.6667 | 0.1860 | 5.6644 |
| 0.3669 | 2.2222 | 150 | 0.3865 | 0.1420 | -17.3676 | -3437302.8571 | -0.9359 | -29.1656 | -7374456.1404 | 1.0779 | 0.0 |
| 0.2687 | 2.9630 | 200 | 0.2844 | 0.3564 | -15.2243 | -3008007.1111 | -2.3051 | -42.8578 | -7507831.0175 | 2.6615 | 0.1954 |
| 0.2398 | 3.7037 | 250 | 0.2238 | 0.4618 | -14.1696 | -2773128.1270 | -4.0572 | -60.3789 | -7716537.2632 | 4.5191 | 0.0 |
| 0.2508 | 4.4444 | 300 | 0.2089 | 0.3865 | -14.9233 | -2774151.1111 | -5.0725 | -70.5321 | -7890091.7895 | 5.4590 | 0.0 |
| 0.1947 | 5.1852 | 350 | 0.2057 | 0.2042 | -16.7464 | -2611237.0794 | -5.9252 | -79.0592 | -7821654.4561 | 6.1294 | 0.0 |
| 0.1666 | 5.9259 | 400 | 0.2027 | 0.1387 | -17.4006 | -2482929.2698 | -6.1703 | -81.5101 | -7752611.9298 | 6.3091 | 0.0 |
| 0.1956 | 6.6667 | 450 | 0.2023 | 0.1210 | -17.5785 | -2528993.0159 | -6.2460 | -82.2664 | -7765871.1579 | 6.3669 | 0.0 |
| 0.1888 | 7.4074 | 500 | 0.2026 | 0.0571 | -18.2172 | -2538207.2381 | -6.5054 | -84.8605 | -7796628.2105 | 6.5625 | 0.0 |
| 0.2411 | 8.1481 | 550 | 0.2024 | 0.0368 | -18.4202 | -2527997.9683 | -6.6091 | -85.8983 | -7806604.3509 | 6.6459 | 0.0 |
| 0.2231 | 8.8889 | 600 | 0.2018 | 0.0382 | -18.4056 | -2503114.1587 | -6.5431 | -85.2377 | -7783503.1579 | 6.5813 | 0.0 |
| 0.1966 | 9.6296 | 650 | 0.2017 | 0.0412 | -18.3761 | -2496719.4921 | -6.6216 | -86.0225 | -7772195.3684 | 6.6628 | 0.0 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
kawauso/distilbert-base-uncased-finetuned-emotion | kawauso | "2022-12-06T19:49:54Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-11-29T21:17:12Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9255179580374608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2213
- Accuracy: 0.9255
- F1: 0.9255
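As a quick, hedged inference sketch (not part of the original training code; depending on the saved config the labels may show up as raw `LABEL_<id>` entries rather than emotion names):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline and classify a sentence
classifier = pipeline("text-classification", model="kawauso/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```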
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8391 | 1.0 | 250 | 0.3177 | 0.9035 | 0.9006 |
| 0.2526 | 2.0 | 500 | 0.2213 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
rizvi-rahil786/distilbert-base-uncased-usFlood | rizvi-rahil786 | "2024-03-15T08:45:07Z" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-15T08:37:14Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-usFlood
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-usFlood
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3688 | 1.0 | 3053 | 0.4474 |
| 0.3327 | 2.0 | 6106 | 0.2575 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
toxicwind/flux-training-murakami-flowers | toxicwind | "2024-08-13T05:50:16Z" | 6 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-08-12T21:12:17Z" | ---
license: creativeml-openrail-m
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'sprinkled donut in fuchihana style with heart eyes, white border, green background'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
---
# flux-training-murakami-flowers
This is a LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
sprinkled donut in fuchihana style with heart eyes, white border, green background
```
## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `15`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 54
- Training steps: 2000
- Learning rate: 0.0001
- Effective batch size: 6
- Micro-batch size: 6
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: AdamW, stochastic bf16
- Precision: Pure BF16
- Xformers: Enabled
- LoRA Rank: 32
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### flux-murakami-flowers
- Repeats: 0
- Total number of images: 222
- Total number of aspect buckets: 1
- Resolution: 512 px
- Cropped: True
- Crop style: center
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'toxicwind/flux-training-murakami-flowers'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "sprinkled donut in fuchihana style with heart eyes, white border, green background"
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
num_inference_steps=15,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1024,
height=1024,
guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
|
subhash05/eagle | subhash05 | "2024-03-13T10:41:39Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-13T10:32:59Z" | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Eagle Dreambooth model trained by dawudvali04 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 22231A4215
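The card does not include usage code; by analogy with other DreamBooth cards, a hedged sketch could look like the following (the exact instance prompt is not stated here, so the prompt below is only a guess):
```python
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tune and generate a sample image of the learned concept
pipeline = StableDiffusionPipeline.from_pretrained("subhash05/eagle")
image = pipeline("a photo of eagle flying over mountains").images[0]
image.save("eagle.png")
```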
Sample pictures of this concept:
|
mradermacher/Asherah_7B-GGUF | mradermacher | "2024-12-23T19:13:10Z" | 13 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"dataset:Epiculous/Gnosis",
"base_model:ResplendentAI/Asherah_7B",
"base_model:quantized:ResplendentAI/Asherah_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-23T18:10:23Z" | ---
base_model: ResplendentAI/Asherah_7B
datasets:
- ResplendentAI/Synthetic_Soul_1k
- Epiculous/Gnosis
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ResplendentAI/Asherah_7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Asherah_7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
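As one option among several, here is a hedged sketch using the `llama-cpp-python` bindings with the Q4_K_M file recommended in the table below (adjust the filename if you pick a different quant):
```python
from llama_cpp import Llama

# Download a single quant file from this repo and run a short completion
llm = Llama.from_pretrained(
    repo_id="mradermacher/Asherah_7B-GGUF",
    filename="Asherah_7B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Q: What is a GGUF file? A:", max_tokens=64)
print(out["choices"][0]["text"])
```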
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Asherah_7B-GGUF/resolve/main/Asherah_7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Punter1504/wav2vec2-base-timit-demo-google-colab | Punter1504 | "2024-04-27T20:00:23Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-27T19:34:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AUTOMATIC/promptgen-majinai-safe | AUTOMATIC | "2023-01-18T21:13:41Z" | 474 | 16 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-01-18T21:04:42Z" | ---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
Finetuned `distilgpt2` for 40 epochs on 1654 prompts scraped from majinai.art. Weights/emphasis stripped. Includes negative prompts.
Intended for use with https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen
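Outside of the webui extension, the checkpoint can also be sampled directly with `transformers`; the following is only a rough sketch, not part of the original instructions:
```python
from transformers import pipeline

# Sample a few prompt completions from the fine-tuned distilgpt2 checkpoint
generator = pipeline("text-generation", model="AUTOMATIC/promptgen-majinai-safe")
for out in generator("masterpiece, best quality,", max_new_tokens=60, num_return_sequences=3, do_sample=True):
    print(out["generated_text"])
```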
|
Tran1234/SmolLM2-FT-MyDataset | Tran1234 | "2024-12-12T12:32:19Z" | 42 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-12T12:31:50Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Tran1234/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tranha1122c-s/huggingface/runs/lcimoigh)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ShenaoZ/0.0005_zephyr_withdpo_5551_4iters_bs256_newtrl_iter_2 | ShenaoZ | "2024-05-13T11:17:16Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-05-13T11:12:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LHRuig/mearniesx | LHRuig | "2025-02-20T05:09:06Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-20T05:08:32Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mearniesx
---
# mearniesx
<Gallery />
## Model description
mearniesx lora
## Trigger words
You should use `mearniesx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/mearniesx/tree/main) them in the Files & versions tab.
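No usage code or recommended settings are given here, so the following is only a hedged sketch of how a FLUX LoRA like this is typically loaded with `diffusers`, using the trigger word from above:
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base pipeline and attach the LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("LHRuig/mearniesx")

# The trigger word "mearniesx" activates the concept
image = pipe("mearniesx wearing a tailored suit, studio portrait").images[0]
image.save("mearniesx.png")
```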
|
waynejustco/justChat | waynejustco | "2025-02-20T03:24:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"updated",
"version-1",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-20T02:27:40Z" | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- updated
- version-1
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** waynejustco
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HachiML/ReasoningVector-DeepSeek-R1-Distill-Llama-8B | HachiML | "2025-02-18T15:29:33Z" | 0 | 0 | null | [
"safetensors",
"llama",
"reasoning",
"reasoning-vector",
"weight-diff",
"deepseek",
"en",
"ja",
"license:llama3.1",
"region:us"
] | null | "2025-02-18T13:21:52Z" | ---
tags:
- reasoning
- reasoning-vector
- weight-diff
- llama
- deepseek
license: llama3.1
language:
- en
- ja
---
# Reasoning Vector
## Overview
**Reasoning Vector** is a model obtained by extracting the weight difference between a base model and a Reasoning model.
It is constructed the same way as a ChatVector and is used to add reasoning capability to a base model that has undergone additional fine-tuning.
It cannot be used on its own.
### Models
- **Base Model**: [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
- **Reasoning Model**: [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
## Usage
Below is an example of applying the Reasoning Vector to a base model to build a reasoning model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("your-base-model")
tokenizer = AutoTokenizer.from_pretrained("your-base-model")

# Load the Reasoning Vector (weight-difference parameters)
reasoning_vector = AutoModelForCausalLM.from_pretrained("HachiML/ReasoningVector-DeepSeek-R1-Distill-Llama-8B")

# Apply the difference to the base model (adapt the application method to your implementation)
# Weights to skip
skip_layers = ["model.embed_tokens.weight", "model.norm.weight", "lm_head.weight"]
for k, v in base_model.state_dict().items():
    # also skip layernorm weights
    if (k in skip_layers) or ("layernorm" in k):
        continue
    new_v = v + reasoning_vector.state_dict()[k].to(v.device)
    v.copy_(new_v)

# Example inference
inputs = tokenizer("Enter the text you want the model to reason about", return_tensors="pt")
outputs = base_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
kostiantynk/9f47bedf-e620-44dc-a3cd-ae5edc5612cd | kostiantynk | "2025-01-31T08:44:49Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T08:10:10Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9f47bedf-e620-44dc-a3cd-ae5edc5612cd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 76456b933bd6f3db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/76456b933bd6f3db_train_data.json
type:
field_input: tokens
field_instruction: wikimedia_file
field_output: caption
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/9f47bedf-e620-44dc-a3cd-ae5edc5612cd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/76456b933bd6f3db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 80ebeba5-ab02-4d0a-89cc-f03ad9df2399
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9f47bedf-e620-44dc-a3cd-ae5edc5612cd
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.2820 |
| 0.1616 | 0.0004 | 13 | 0.0460 |
| 0.1001 | 0.0007 | 26 | 0.0358 |
| 0.0308 | 0.0011 | 39 | 0.0332 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mohammadnpak/hakim | mohammadnpak | "2024-01-31T15:05:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-01-31T10:23:38Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: hakim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hakim
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4263 | 0.1 | 921 | 1.3717 |
| 1.2695 | 0.2 | 1842 | 1.2659 |
| 1.2029 | 0.3 | 2763 | 1.2218 |
| 1.2017 | 0.4 | 3684 | 1.1812 |
| 1.1308 | 0.5 | 4605 | 1.1586 |
| 1.1521 | 0.6 | 5526 | 1.1369 |
| 1.1189 | 0.7 | 6447 | 1.1188 |
| 1.1586 | 0.8 | 7368 | 1.1097 |
| 1.0841 | 0.9 | 8289 | 1.0965 |
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
gmurillo/set-fit-goup-4-f | gmurillo | "2023-07-12T18:57:16Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bart",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-07-12T18:55:50Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gmurillo/set-fit-goup-4-f
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a rough sketch of these two steps is shown below).
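The sketch below is generic and illustrative only: the dataset, the sentence-transformer body, and the hyperparameters are assumptions rather than the settings used for this particular model, and it follows the SetFit 0.x trainer API:
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Steps 1 and 2: contrastive fine-tuning of the body, then fitting the classification head
dataset = load_dataset("sst2")
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"].shuffle(seed=42).select(range(64)),  # few-shot subset
    eval_dataset=dataset["validation"],
    loss_class=CosineSimilarityLoss,
    batch_size=16,
    num_iterations=20,  # number of contrastive text pairs generated per example
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
print(trainer.evaluate())
```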
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gmurillo/set-fit-goup-4-f")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
dadosdq/wbchaop-wallbed | dadosdq | "2023-01-21T04:06:07Z" | 4 | 0 | diffusers | [
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"wildcard",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-21T04:04:51Z" | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: a photo of a white wbchaop wallbed
---
# DreamBooth model for the wbchaop concept trained by dadosdq on the dadosdq/wallbed_dataset dataset.
This is a Stable Diffusion model fine-tuned on the wbchaop concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of wbchaop wallbed**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `wallbed` images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('dadosdq/wbchaop-wallbed')
image = pipeline().images[0]
image
```
|
asun17904/glue-qnli-t5-base-alum | asun17904 | "2024-02-02T02:33:25Z" | 0 | 0 | pytorch | [
"pytorch",
"en",
"license:mit",
"region:us"
] | null | "2024-02-01T20:25:08Z" | ---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 4
- `gradient_accumulation_steps` = 4
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|
Augustya07/Mistral-7B-Instruct-v0.2-sft-test-push-adapters | Augustya07 | "2024-01-30T13:18:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-30T13:17:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stablediffusionapi/real-cartoon-realist | stablediffusionapi | "2025-01-20T11:29:15Z" | 37 | 0 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-09-05T10:21:20Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Real Cartoon Realistic API Inference

## Get API Key
Get API key from [ModelsLab](https://modelslab.com/), No Payment needed.
Replace Key in below code, change **model_id** to "real-cartoon-realist"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/real-cartoon-realist)
Model link: [View model](https://stablediffusionapi.com/models/real-cartoon-realist)
Credits: [View credits](https://civitai.com/?query=Real%20Cartoon%20Realistic)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "real-cartoon-realist",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Pinkstack/PARM-v1-Qwen2.5-O.1-0.5B-VLLM | Pinkstack | "2025-01-16T14:37:06Z" | 90 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-15T22:04:52Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
base model |
timbrooks/instruct-pix2pix | timbrooks | "2023-07-05T16:19:25Z" | 78,060 | 1,061 | diffusers | [
"diffusers",
"safetensors",
"image-to-image",
"license:mit",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] | image-to-image | "2023-01-20T04:27:06Z" | ---
license: mit
tags:
- image-to-image
---
# InstructPix2Pix: Learning to Follow Image Editing Instructions
GitHub: https://github.com/timothybrooks/instruct-pix2pix
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## Example
To use `InstructPix2Pix`, install `diffusers` using `main` for now. The pipeline will be available in the next release.
```bash
pip install diffusers accelerate safetensors transformers
```
```python
import PIL
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"
def download_image(url):
image = PIL.Image.open(requests.get(url, stream=True).raw)
image = PIL.ImageOps.exif_transpose(image)
image = image.convert("RGB")
return image
image = download_image(url)
prompt = "turn him into cyborg"
images = pipe(prompt, image=image, num_inference_steps=10, image_guidance_scale=1).images
images[0]
``` |
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3_l6_v100 | KingKazma | "2023-08-11T21:54:07Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-11T20:35:21Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Romain-XV/9508c439-7994-4bf6-9610-19c544b83990 | Romain-XV | "2025-02-07T10:20:53Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-02-07T08:15:27Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9508c439-7994-4bf6-9610-19c544b83990
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6cc7122f63033ef0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6cc7122f63033ef0_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/9508c439-7994-4bf6-9610-19c544b83990
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 213
micro_batch_size: 4
mlflow_experiment_name: /tmp/6cc7122f63033ef0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d6040c50-f811-4bce-b162-c853adfdf3aa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d6040c50-f811-4bce-b162-c853adfdf3aa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9508c439-7994-4bf6-9610-19c544b83990
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9627
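This repository contains a LoRA adapter rather than full model weights, so inference normally means loading the base model and applying the adapter with PEFT. The snippet below is a minimal sketch, not part of the original training setup; the model IDs come from the config above, while the dtype, prompt, and generation settings are illustrative only.
```python
# Minimal sketch: load the LoRA adapter on top of its base model (requires transformers, peft, accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-instruct-v0.3"
adapter_id = "Romain-XV/9508c439-7994-4bf6-9610-19c544b83990"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights from this repo

prompt = "Explain gradient accumulation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```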
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 213
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 19.1336 | 0.0013 | 1 | 1.1686 |
| 14.3021 | 0.1307 | 100 | 0.9737 |
| 15.7535 | 0.2613 | 200 | 0.9627 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
OldCrazyCoder/ppo-SnowballTarget | OldCrazyCoder | "2023-08-23T18:49:57Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-08-23T18:49:55Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: OldCrazyCoder/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Minbyul/meditron-7b-wo-live_qa-iter-sft-step1 | Minbyul | "2024-05-11T14:41:03Z" | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:epfl-llm/meditron-7b",
"base_model:finetune:epfl-llm/meditron-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-11T14:14:43Z" | ---
license: llama2
base_model: epfl-llm/meditron-7b
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: meditron-7b-wo-live_qa-iter-sft-step1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-wo-live_qa-iter-sft-step1
This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4036 | 0.96 | 19 | 1.4487 |
| 2.0418 | 1.97 | 39 | 1.4852 |
| 1.8471 | 2.89 | 57 | 1.5597 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
DBusAI/PPO-BipedalWalker-v3-v2 | DBusAI | "2022-05-13T16:46:30Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-13T16:40:07Z" | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 303.47 +/- 1.90
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
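In the meantime, the snippet below is a minimal sketch of loading and evaluating the agent with `huggingface_sb3`; the checkpoint filename is a guess, so check the repository's file list for the exact name, and adjust for the gymnasium API if you are not on classic Gym.
```python
# Minimal sketch: download the checkpoint from the Hub and roll out one episode.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="DBusAI/PPO-BipedalWalker-v3-v2",
    filename="PPO-BipedalWalker-v3.zip",  # hypothetical filename; verify in the repo files
)
model = PPO.load(checkpoint)

env = gym.make("BipedalWalker-v3")
obs = env.reset()          # classic Gym API: reset() returns obs, step() returns a 4-tuple
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print(f"episode return: {total_reward:.1f}")
```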
|
mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF | mradermacher | "2025-01-26T16:35:06Z" | 255 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"sft",
"en",
"base_model:NextGLab/ORANSight_Gemma_2_2B_Instruct",
"base_model:quantized:NextGLab/ORANSight_Gemma_2_2B_Instruct",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-26T16:09:28Z" | ---
base_model: NextGLab/ORANSight_Gemma_2_2B_Instruct
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NextGLab/ORANSight_Gemma_2_2B_Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
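As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings (one of several GGUF runtimes; not part of the original card). It assumes you have already downloaded the Q4_K_M file from the table below into your working directory; the prompt and sampling settings are illustrative.
```python
# Minimal sketch: run a downloaded GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="ORANSight_Gemma_2_2B_Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what the O-RAN near-RT RIC does, in two sentences.", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```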
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Gemma_2_2B_Instruct-GGUF/resolve/main/ORANSight_Gemma_2_2B_Instruct.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Proton-Tony/Meta_SAM | Proton-Tony | "2023-04-17T15:35:56Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2023-04-16T20:28:52Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
Meta Segment-anything
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Leul78/sftwork | Leul78 | "2024-02-21T09:07:03Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-21T09:03:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DevQuasar/allenai.OLMoE-1B-7B-0125-DPO-GGUF | DevQuasar | "2025-02-12T07:31:38Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:allenai/OLMoE-1B-7B-0125-DPO",
"base_model:quantized:allenai/OLMoE-1B-7B-0125-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-11T15:14:40Z" | ---
base_model:
- allenai/OLMoE-1B-7B-0125-DPO
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [allenai/OLMoE-1B-7B-0125-DPO](https://huggingface.co/allenai/OLMoE-1B-7B-0125-DPO)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
great0001/7506ed42-94d0-4990-bd4e-fc9c80cef0c1 | great0001 | "2025-01-18T01:24:53Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-18T01:24:02Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7506ed42-94d0-4990-bd4e-fc9c80cef0c1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ab0406911ff3d27c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ab0406911ff3d27c_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/7506ed42-94d0-4990-bd4e-fc9c80cef0c1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ab0406911ff3d27c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb4495a3-40e6-47d5-8acb-8dbb8fc1970b
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb4495a3-40e6-47d5-8acb-8dbb8fc1970b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7506ed42-94d0-4990-bd4e-fc9c80cef0c1
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0029 | 1 | nan |
| 0.0 | 0.0086 | 3 | nan |
| 0.0 | 0.0171 | 6 | nan |
| 0.0 | 0.0257 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheBloke/airoboros-33b-gpt4-GGML | TheBloke | "2023-06-11T13:56:53Z" | 0 | 6 | null | [
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:other",
"region:us"
] | null | "2023-06-11T12:26:19Z" | ---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.1
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Jon Durbin's Airoboros 33B GPT4 GGML
These files are GGML format model files for [Jon Durbin's Airoboros 33B GPT4](https://huggingface.co/jondurbin/airoboros-33b-gpt4).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-33b-gpt4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-33b-gpt4-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-33b-gpt4)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
They will NOT be compatible with koboldcpp, text-generation-ui, and other UIs and libraries yet. Support is expected to come over the next few days.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airoboros-33b-gpt4.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airoboros-33b-gpt4.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-33b-gpt4.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-33b-gpt4.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airoboros-33b-gpt4.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| airoboros-33b-gpt4.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airoboros-33b-gpt4.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airoboros-33b-gpt4.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airoboros-33b-gpt4.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airoboros-33b-gpt4.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-33b-gpt4.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airoboros-33b-gpt4.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airoboros-33b-gpt4.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| airoboros-33b-gpt4.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airoboros-33b-gpt4.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 33B GPT4
## Overview
This is a qlora fine-tuned 33b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that if the context is completely unrelated the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
Bluberries are green.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with contexts that contain many question/answer pairs. If you ask a question of a document with question/answer pairs, the model may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
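As a tiny illustration of that workaround (mine, not the author's), strip question marks from the context documents only, never from the instruction itself:
```python
# Neutralize question marks in context blocks before building the closed-context prompt.
def neutralize_questions(context_text: str) -> str:
    return context_text.replace("?", ".")

doc = "Were the search results fixed? Yes, in PR 23441."
print(neutralize_questions(doc))  # -> "Were the search results fixed. Yes, in PR 23441."
```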
### Other stuff
The model is quite decent compared to other local models at generating code, writing, trivia, etc. Give it a shot at anything, and let me know where it falls apart.
|
Trelis/99-v9 | Trelis | "2024-09-24T02:28:28Z" | 127 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:Trelis/SmolLM-135M-Instruct-layer-pruned-90M-raw",
"base_model:finetune:Trelis/SmolLM-135M-Instruct-layer-pruned-90M-raw",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-24T02:28:22Z" | ---
library_name: transformers
license: apache-2.0
base_model: Trelis/SmolLM-135M-Instruct-layer-pruned-90M-raw
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: 99-v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 99-v9
This model is a fine-tuned version of [Trelis/SmolLM-135M-Instruct-layer-pruned-90M-raw](https://huggingface.co/Trelis/SmolLM-135M-Instruct-layer-pruned-90M-raw) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7495
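The card does not include inference code; as a minimal sketch, the checkpoint can be loaded with the standard `transformers` text-generation pipeline (the prompt and generation settings below are illustrative only):
```python
# Minimal sketch: deterministic generation from the fine-tuned 90M-parameter model.
from transformers import pipeline

generator = pipeline("text-generation", model="Trelis/99-v9")
print(generator("Question: What is 2 + 2?\nAnswer:", max_new_tokens=32, do_sample=False))
```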
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.005
- lr_scheduler_warmup_steps: 89
- training_steps: 17894
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.6331 | 0.0500 | 894 | 0.6004 |
| 0.5667 | 0.0999 | 1788 | 0.5463 |
| 0.5423 | 0.1499 | 2682 | 0.5138 |
| 0.5749 | 0.1998 | 3576 | 0.7377 |
| 0.5378 | 0.2498 | 4470 | 0.7542 |
| 0.506 | 0.2998 | 5364 | 0.7902 |
| 0.5561 | 0.3497 | 6258 | 0.7810 |
| 0.5259 | 0.3997 | 7152 | 0.7914 |
| 0.5516 | 0.4496 | 8046 | 0.7611 |
| 0.5131 | 0.4996 | 8940 | 0.6860 |
| 0.5069 | 0.5496 | 9834 | 0.7247 |
| 0.4977 | 0.5995 | 10728 | 0.7375 |
| 0.4976 | 0.6495 | 11622 | 0.7436 |
| 0.5018 | 0.6995 | 12516 | 0.7520 |
| 0.537 | 0.7494 | 13410 | 0.7613 |
| 0.5018 | 0.7994 | 14304 | 0.6922 |
| 0.4891 | 0.8493 | 15198 | 0.7322 |
| 0.4808 | 0.8993 | 16092 | 0.7430 |
| 0.5231 | 0.9493 | 16986 | 0.7546 |
| 0.5103 | 0.9992 | 17880 | 0.7495 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
furrutiav/bert_qa_extractor_cockatiel_2022_best_ef_z_value_it_812 | furrutiav | "2024-02-26T00:18:47Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-02-26T00:18:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WurmWillem/DialoGPT-medium-RickandMorty3 | WurmWillem | "2021-09-29T17:15:34Z" | 0 | 0 | null | [
"conversational",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
tags:
- conversational
---
|
Asheyy/dogbooth | Asheyy | "2023-09-20T10:01:40Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-09-17T09:20:41Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Asheyy/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on the instance prompt `a photo of [v]dog` using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
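A minimal inference sketch with the `diffusers` library is shown below; the prompt builds on the instance prompt `a photo of [v]dog`, and the device, dtype, and sampling settings are illustrative assumptions rather than values used during training.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "Asheyy/dogbooth", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# The instance token [v] refers to the subject the model was trained on
image = pipe(
    "a photo of [v]dog in a field",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("dogbooth_sample.png")
```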
|
YakovElm/Apache20Classic_MSE | YakovElm | "2023-06-09T18:52:47Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-09T18:52:13Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_MSE
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_MSE
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0360
- Train Accuracy: 0.4809
- Validation Loss: 0.0876
- Validation Accuracy: 0.7816
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
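As a rough starting point, the snippet below loads the checkpoint for inference. It is a sketch only: the input string is a placeholder, and reusing the `bert-base-uncased` tokenizer is an assumption, since the intended label semantics are not documented here.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumption: fall back to the base model's tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("YakovElm/Apache20Classic_MSE")

inputs = tokenizer("Example issue text goes here", return_tensors="tf")
logits = model(**inputs).logits  # raw scores; interpretation depends on the training setup
print(logits)
```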
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0451 | 0.4930 | 0.0829 | 0.8833 | 0 |
| 0.0374 | 0.5074 | 0.0845 | 0.4316 | 1 |
| 0.0360 | 0.4809 | 0.0876 | 0.7816 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
u23429/headline-predictor | u23429 | "2023-03-27T22:05:18Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:u23429/autotrain-data-stock-distil",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-27T21:58:02Z" | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- u23429/autotrain-data-stock-distil
co2_eq_emissions:
emissions: 2.960971697133151
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 44339111846
- CO2 Emissions (in grams): 2.9610
## Validation Metrics
- Loss: 1.634
- Accuracy: 0.940
- Macro F1: 0.882
- Micro F1: 0.940
- Weighted F1: 0.924
- Macro Precision: 0.876
- Micro Precision: 0.940
- Weighted Precision: 0.914
- Macro Recall: 0.900
- Micro Recall: 0.940
- Weighted Recall: 0.940
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/u23429/autotrain-stock-distil-44339111846
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("u23429/autotrain-stock-distil-44339111846", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("u23429/autotrain-stock-distil-44339111846", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
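# Map the highest-scoring logit back to its class name via the model config
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])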
``` |
MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF | MaziyarPanahi | "2024-01-26T06:35:42Z" | 51 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"MRAIRR/MRAI_synatra_7B_v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp",
"base_model:quantized:MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp",
"conversational"
] | text-generation | "2024-01-25T15:08:46Z" | ---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- MRAIRR/MRAI_synatra_7B_v1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF
base_model: MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp)
## Description
[MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF) contains GGUF format model files for [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF) and below it, a specific filename to download, such as: MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
mradermacher/Irene-RP-v3-7B-GGUF | mradermacher | "2024-05-06T06:04:42Z" | 3 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"mistral",
"roleplay",
"en",
"base_model:Virt-io/Irene-RP-v3-7B",
"base_model:quantized:Virt-io/Irene-RP-v3-7B",
"endpoints_compatible",
"region:us"
] | null | "2024-03-22T04:33:08Z" | ---
base_model: Virt-io/Irene-RP-v3-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- mistral
- roleplay
---
## About
static quants of https://huggingface.co/Virt-io/Irene-RP-v3-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
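As a small example, a single quant from the table below can be fetched from Python with `huggingface_hub` (a sketch; pick whichever quant file fits your hardware):
```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from this repository to the local cache
path = hf_hub_download(
    repo_id="mradermacher/Irene-RP-v3-7B-GGUF",
    filename="Irene-RP-v3-7B.Q4_K_M.gguf",
)
print(path)
```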
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Irene-RP-v3-7B-GGUF/resolve/main/Irene-RP-v3-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kilfyuvgunn/J | kilfyuvgunn | "2025-02-18T20:25:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-18T20:24:40Z" | # Asking five questions
name = input("What is your name? ")
age = input("How old are you? ")
favorite_color = input("What is your favorite color? ")
hobby = input("What is your favorite hobby? ")
dream_job = input("What is your dream job? ")
# Printing responses
print("\nHere are your answers:")
print(f"Name: {name}")
print(f"Age: {age}")
print(f"Favorite Color: {favorite_color}")
print(f"Favorite Hobby: {hobby}")
print(f"Dream Job: {dream_job}") |
andrei-teodor/resnet-pretrained-brain-mri | andrei-teodor | "2024-08-28T12:47:34Z" | 205 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-08-26T15:49:43Z" | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-pretrained-brain-mri
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-pretrained-brain-mri
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the BrainMRI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1450
- Accuracy: 0.5228
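For a quick smoke test, the checkpoint can be loaded through the `image-classification` pipeline. This is a sketch only: the image path is a placeholder, and the label names come from whatever the training script stored in the model config.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="andrei-teodor/resnet-pretrained-brain-mri",
)
# Placeholder path: point this at a brain MRI slice saved as an image file
print(classifier("example_mri_slice.png"))
```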
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| No log | 1.0 | 72 | 0.4704 | 1.2440 |
| 1.2771 | 2.0 | 144 | 0.5575 | 1.1610 |
| 1.1543 | 3.0 | 216 | 0.6446 | 1.0949 |
| 1.1543 | 4.0 | 288 | 0.6812 | 1.0361 |
| 1.0664 | 5.0 | 360 | 0.6742 | 1.0100 |
| 0.9998 | 6.0 | 432 | 0.7003 | 0.9687 |
| 0.9537 | 7.0 | 504 | 0.6986 | 0.9484 |
| 0.9537 | 8.0 | 576 | 0.6934 | 0.9285 |
| 0.9239 | 9.0 | 648 | 0.7108 | 0.8992 |
| 0.893 | 10.0 | 720 | 0.7369 | 0.8723 |
| 0.893 | 11.0 | 792 | 0.7334 | 0.8635 |
| 0.8726 | 12.0 | 864 | 0.7474 | 0.8589 |
| 0.8482 | 13.0 | 936 | 0.7160 | 0.8423 |
| 0.8461 | 14.0 | 1008 | 0.7300 | 0.8481 |
| 0.8461 | 15.0 | 1080 | 0.7352 | 0.8312 |
| 0.8267 | 16.0 | 1152 | 0.7247 | 0.8319 |
| 0.8163 | 17.0 | 1224 | 0.7456 | 0.8136 |
| 0.8163 | 18.0 | 1296 | 0.7474 | 0.8151 |
| 0.8126 | 19.0 | 1368 | 0.7596 | 0.8071 |
| 0.8022 | 20.0 | 1440 | 0.7491 | 0.8210 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
mradermacher/X-R1-1.5B-GGUF | mradermacher | "2025-02-13T00:21:02Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:xiaodongguaAIGC/X-R1-1.5B",
"base_model:quantized:xiaodongguaAIGC/X-R1-1.5B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-12T23:49:28Z" | ---
base_model: xiaodongguaAIGC/X-R1-1.5B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xiaodongguaAIGC/X-R1-1.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
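For instance, one of the quants listed below can be downloaded and run locally with `llama-cpp-python` (a sketch; the chosen file, context size, and prompt are illustrative only):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant listed in the table below
model_path = hf_hub_download(
    repo_id="mradermacher/X-R1-1.5B-GGUF",
    filename="X-R1-1.5B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```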
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/X-R1-1.5B-GGUF/resolve/main/X-R1-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kanwal-mehreen18/hindi-gemma9b-A50 | kanwal-mehreen18 | "2025-01-23T18:22:36Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2-9b-it",
"base_model:finetune:unsloth/gemma-2-9b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-23T18:16:21Z" | ---
base_model: unsloth/gemma-2-9b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kanwal-mehreen18
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-it
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
glli/ART_WriterModel | glli | "2024-09-26T05:21:08Z" | 22 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2024-09-26T05:16:59Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | "2023-10-17T21:30:48Z" | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] | token-classification | "2023-10-17T13:21:46Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmTEAMS as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from mid-19C to mid 20C.
The following NEs were annotated: `loc`, `org` and `pers`.
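A short usage sketch with Flair is shown below; it tags the example sentence from the widget above, and it assumes the span label type registered in the tagger is `ner`.
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner celui que "
    "M . Schatzmann , de Lausanne , a proposé :"
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```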
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6651][1] | [0.6542][2] | [0.66][3] | [0.6705][4] | [0.6702][5] | 66.4 ± 0.62 |
| bs4-e10-lr3e-05 | [0.66][6] | [0.6641][7] | [0.6641][8] | [0.6595][9] | [0.6548][10] | 66.05 ± 0.35 |
| bs8-e10-lr5e-05 | [0.6564][11] | [0.6555][12] | [0.6598][13] | [0.6581][14] | [0.6636][15] | 65.87 ± 0.29 |
| bs4-e10-lr5e-05 | [0.6415][16] | [0.6602][17] | [0.601][18] | [0.6505][19] | [0.6638][20] | 64.34 ± 2.26 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
romainnn/5245836f-40d3-4e05-a231-8462c8c2baff | romainnn | "2025-02-18T16:55:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-18T09:10:15Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5245836f-40d3-4e05-a231-8462c8c2baff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 40527198fd7180da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/40527198fd7180da_train_data.json
type:
field_input: bodies
field_instruction: decl
field_output: desc
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
early_stopping_threshold: 0.0001
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: romainnn/5245836f-40d3-4e05-a231-8462c8c2baff
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 9983
micro_batch_size: 4
mlflow_experiment_name: /tmp/40527198fd7180da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.00917724190842581
wandb_entity: null
wandb_mode: online
wandb_name: a88f056a-2840-48ca-ada7-c0aa6accc543
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a88f056a-2840-48ca-ada7-c0aa6accc543
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5245836f-40d3-4e05-a231-8462c8c2baff
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8156
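To try the adapter, it can be attached to the base model with PEFT. The snippet below is a sketch under the assumption that the adapter weights sit at the root of this repository; the prompt and generation settings are placeholders.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "romainnn/5245836f-40d3-4e05-a231-8462c8c2baff")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")

inputs = tokenizer("def add(a, b):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```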
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 9983
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5279 | 0.0001 | 1 | 3.4427 |
| 2.0594 | 0.0059 | 100 | 2.2386 |
| 2.2409 | 0.0119 | 200 | 2.2071 |
| 1.9584 | 0.0178 | 300 | 2.1853 |
| 1.9313 | 0.0237 | 400 | 2.1659 |
| 1.8711 | 0.0296 | 500 | 2.1537 |
| 1.8007 | 0.0356 | 600 | 2.1435 |
| 2.1542 | 0.0415 | 700 | 2.1296 |
| 2.0665 | 0.0474 | 800 | 2.1189 |
| 2.3934 | 0.0534 | 900 | 2.1166 |
| 2.3511 | 0.0593 | 1000 | 2.1009 |
| 2.0545 | 0.0652 | 1100 | 2.0989 |
| 2.2765 | 0.0711 | 1200 | 2.0876 |
| 1.9124 | 0.0771 | 1300 | 2.0830 |
| 1.9408 | 0.0830 | 1400 | 2.0767 |
| 2.1107 | 0.0889 | 1500 | 2.0676 |
| 2.055 | 0.0948 | 1600 | 2.0619 |
| 1.7226 | 0.1008 | 1700 | 2.0545 |
| 1.8976 | 0.1067 | 1800 | 2.0473 |
| 1.9798 | 0.1126 | 1900 | 2.0380 |
| 1.9494 | 0.1186 | 2000 | 2.0348 |
| 2.0733 | 0.1245 | 2100 | 2.0308 |
| 1.9957 | 0.1304 | 2200 | 2.0251 |
| 2.0361 | 0.1363 | 2300 | 2.0206 |
| 2.2313 | 0.1423 | 2400 | 2.0176 |
| 1.8886 | 0.1482 | 2500 | 2.0082 |
| 2.1336 | 0.1541 | 2600 | 2.0078 |
| 1.8516 | 0.1601 | 2700 | 2.0024 |
| 1.9572 | 0.1660 | 2800 | 1.9952 |
| 1.6813 | 0.1719 | 2900 | 1.9922 |
| 1.7759 | 0.1778 | 3000 | 1.9856 |
| 1.8491 | 0.1838 | 3100 | 1.9809 |
| 2.1544 | 0.1897 | 3200 | 1.9768 |
| 2.0521 | 0.1956 | 3300 | 1.9671 |
| 1.6342 | 0.2015 | 3400 | 1.9630 |
| 2.0625 | 0.2075 | 3500 | 1.9614 |
| 1.9992 | 0.2134 | 3600 | 1.9548 |
| 1.7514 | 0.2193 | 3700 | 1.9516 |
| 1.8403 | 0.2253 | 3800 | 1.9478 |
| 1.9456 | 0.2312 | 3900 | 1.9440 |
| 1.4284 | 0.2371 | 4000 | 1.9384 |
| 2.1788 | 0.2430 | 4100 | 1.9353 |
| 1.7432 | 0.2490 | 4200 | 1.9302 |
| 1.7911 | 0.2549 | 4300 | 1.9272 |
| 1.94 | 0.2608 | 4400 | 1.9212 |
| 1.9083 | 0.2668 | 4500 | 1.9176 |
| 1.5717 | 0.2727 | 4600 | 1.9148 |
| 1.9185 | 0.2786 | 4700 | 1.9123 |
| 1.8033 | 0.2845 | 4800 | 1.9077 |
| 1.9307 | 0.2905 | 4900 | 1.9032 |
| 2.3501 | 0.2964 | 5000 | 1.8999 |
| 1.9069 | 0.3023 | 5100 | 1.8955 |
| 2.0346 | 0.3082 | 5200 | 1.8940 |
| 1.7066 | 0.3142 | 5300 | 1.8910 |
| 1.888 | 0.3201 | 5400 | 1.8858 |
| 1.9475 | 0.3260 | 5500 | 1.8833 |
| 1.6037 | 0.3320 | 5600 | 1.8814 |
| 1.7273 | 0.3379 | 5700 | 1.8777 |
| 1.8923 | 0.3438 | 5800 | 1.8740 |
| 2.3106 | 0.3497 | 5900 | 1.8707 |
| 1.3758 | 0.3557 | 6000 | 1.8684 |
| 1.9701 | 0.3616 | 6100 | 1.8651 |
| 1.854 | 0.3675 | 6200 | 1.8617 |
| 1.5884 | 0.3735 | 6300 | 1.8588 |
| 1.7065 | 0.3794 | 6400 | 1.8564 |
| 1.5681 | 0.3853 | 6500 | 1.8529 |
| 1.9251 | 0.3912 | 6600 | 1.8504 |
| 1.6854 | 0.3972 | 6700 | 1.8487 |
| 1.9832 | 0.4031 | 6800 | 1.8456 |
| 1.2331 | 0.4090 | 6900 | 1.8452 |
| 1.8641 | 0.4149 | 7000 | 1.8421 |
| 1.8737 | 0.4209 | 7100 | 1.8402 |
| 1.6779 | 0.4268 | 7200 | 1.8377 |
| 1.3901 | 0.4327 | 7300 | 1.8359 |
| 1.4336 | 0.4387 | 7400 | 1.8336 |
| 1.7731 | 0.4446 | 7500 | 1.8318 |
| 1.7203 | 0.4505 | 7600 | 1.8304 |
| 1.8094 | 0.4564 | 7700 | 1.8292 |
| 1.8295 | 0.4624 | 7800 | 1.8277 |
| 1.7489 | 0.4683 | 7900 | 1.8271 |
| 1.4063 | 0.4742 | 8000 | 1.8257 |
| 1.6528 | 0.4802 | 8100 | 1.8239 |
| 1.9509 | 0.4861 | 8200 | 1.8234 |
| 1.7355 | 0.4920 | 8300 | 1.8218 |
| 1.6242 | 0.4979 | 8400 | 1.8211 |
| 1.5363 | 0.5039 | 8500 | 1.8196 |
| 1.7172 | 0.5098 | 8600 | 1.8189 |
| 1.5967 | 0.5157 | 8700 | 1.8182 |
| 1.8118 | 0.5216 | 8800 | 1.8180 |
| 1.8206 | 0.5276 | 8900 | 1.8171 |
| 1.7382 | 0.5335 | 9000 | 1.8167 |
| 1.5683 | 0.5394 | 9100 | 1.8165 |
| 1.8624 | 0.5454 | 9200 | 1.8163 |
| 1.9627 | 0.5513 | 9300 | 1.8160 |
| 1.6127 | 0.5572 | 9400 | 1.8158 |
| 1.9056 | 0.5631 | 9500 | 1.8156 |
| 2.1929 | 0.5691 | 9600 | 1.8156 |
| 1.6813 | 0.5750 | 9700 | 1.8157 |
| 1.9613 | 0.5809 | 9800 | 1.8156 |
| 1.5377 | 0.5869 | 9900 | 1.8156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jgchaparro/language_garden-spa-tsd-8B-GGUF | jgchaparro | "2024-11-15T19:53:17Z" | 39 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"autoquant",
"es",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-15T18:54:49Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- autoquant
- gguf
license: apache-2.0
language: es
---
# Uploaded model
- **Developed by:** jgchaparro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
eddysang/af489b16-98a0-4149-8331-215168ccae71 | eddysang | "2025-02-06T01:59:08Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | "2025-02-06T00:09:37Z" | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af489b16-98a0-4149-8331-215168ccae71
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 594a5d2581b70949_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/594a5d2581b70949_train_data.json
type:
field_instruction: selftext
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: eddysang/af489b16-98a0-4149-8331-215168ccae71
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/594a5d2581b70949_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: yaudayah0
wandb_mode: online
wandb_name: 092ece6a-d162-479d-9ec1-caa5241bb041
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 092ece6a-d162-479d-9ec1-caa5241bb041
warmup_steps: 20
weight_decay: 0.015
xformers_attention: false
```
</details><br>
# af489b16-98a0-4149-8331-215168ccae71
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9167
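A minimal way to load the adapter together with its base model is via PEFT's auto class. This is a sketch only; the prompt below is a placeholder.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("eddysang/af489b16-98a0-4149-8331-215168ccae71")
tokenizer = AutoTokenizer.from_pretrained("sethuiyer/Medichat-Llama3-8B")

inputs = tokenizer("I have had a persistent headache for three days.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```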
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: AdamW (torch) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-05 (defaults: betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1149 | 0.0003 | 1 | 3.0799 |
| 1.9624 | 0.0141 | 50 | 1.9685 |
| 1.8316 | 0.0282 | 100 | 1.9167 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Gummybear05/wav2vec2-E50_speed_pause | Gummybear05 | "2024-10-16T09:18:35Z" | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-10-16T07:12:35Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E50_speed_pause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E50_speed_pause
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4049
- Cer: 29.7227
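A quick way to exercise the checkpoint is through the ASR pipeline. The audio path below is a placeholder, and 16 kHz mono input is assumed, since that is the usual sampling rate for XLS-R models.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Gummybear05/wav2vec2-E50_speed_pause",
)
# Placeholder path: a 16 kHz mono recording
print(asr("sample.wav"))
```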
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 32.3933 | 0.1289 | 200 | 4.9500 | 100.0 |
| 4.8782 | 0.2579 | 400 | 4.6402 | 100.0 |
| 4.7485 | 0.3868 | 600 | 4.6460 | 100.0 |
| 4.7179 | 0.5158 | 800 | 4.5728 | 100.0 |
| 4.644 | 0.6447 | 1000 | 4.6080 | 99.0132 |
| 4.61 | 0.7737 | 1200 | 4.5600 | 98.2613 |
| 4.5722 | 0.9026 | 1400 | 4.5529 | 99.4537 |
| 4.4489 | 1.0316 | 1600 | 4.5026 | 98.1144 |
| 4.2793 | 1.1605 | 1800 | 4.1438 | 92.5928 |
| 3.6845 | 1.2895 | 2000 | 3.4651 | 61.3252 |
| 3.0089 | 1.4184 | 2200 | 2.6961 | 50.7049 |
| 2.6617 | 1.5474 | 2400 | 2.3715 | 46.2523 |
| 2.4745 | 1.6763 | 2600 | 2.2327 | 43.4739 |
| 2.2853 | 1.8053 | 2800 | 2.0575 | 41.7704 |
| 2.1079 | 1.9342 | 3000 | 1.9056 | 38.0639 |
| 1.9655 | 2.0632 | 3200 | 1.8005 | 35.8846 |
| 1.8115 | 2.1921 | 3400 | 1.6990 | 35.4088 |
| 1.7347 | 2.3211 | 3600 | 1.6111 | 33.3470 |
| 1.6653 | 2.4500 | 3800 | 1.5471 | 32.6833 |
| 1.5837 | 2.5790 | 4000 | 1.5360 | 31.9608 |
| 1.5514 | 2.7079 | 4200 | 1.4449 | 30.1398 |
| 1.4909 | 2.8369 | 4400 | 1.4166 | 29.7345 |
| 1.4908 | 2.9658 | 4600 | 1.4049 | 29.7227 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
AIDA-UPM/MARTINI_enrich_BERTopic_naujoji_pasaulio_tvarka | AIDA-UPM | "2025-01-13T19:33:15Z" | 5 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | "2025-01-13T19:33:02Z" |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# MARTINI_enrich_BERTopic_naujoji_pasaulio_tvarka
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_naujoji_pasaulio_tvarka")
topic_model.get_topic_info()
```
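New documents can then be mapped onto the learned topics, for example (the strings are placeholders):
```python
# Returns the assigned topic id and probability for each new document
topics, probs = topic_model.transform(
    ["example document one", "another unseen document"]
)
print(topics)
```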
## Topic overview
* Number of topics: 52
* Number of training documents: 6916
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | vakcinos - pfizer - visiskai - https - 2022 | 20 | -1_vakcinos_pfizer_visiskai_https |
| 0 | absurdiskai - atsipalaiduoji - praktiskai - spausdintuvu - reiskinio | 3775 | 0_absurdiskai_atsipalaiduoji_praktiskai_spausdintuvu |
| 1 | palestinieciai - izraelieciu - hamas - netanyahu - raketomis | 226 | 1_palestinieciai_izraelieciu_hamas_netanyahu |
| 2 | biblijos - jezus - pasaulis - antikristas - egipto | 179 | 2_biblijos_jezus_pasaulis_antikristas |
| 3 | evakuotis - japonija - fukusimos - islandijos - tornadu | 135 | 3_evakuotis_japonija_fukusimos_islandijos |
| 4 | globalistu - revoliucijos - kontroliuoti - diskusija - ekonomikos | 130 | 4_globalistu_revoliucijos_kontroliuoti_diskusija |
| 5 | klimatui - 2050 - co2 - greenpeace - planetos | 128 | 5_klimatui_2050_co2_greenpeace |
| 6 | prancuzijos - protestuotojai - paryzieciai - revoliucijos - policija | 125 | 6_prancuzijos_protestuotojai_paryzieciai_revoliucijos |
| 7 | вакцинированы - вакцинои - коронавируса - иммунитет - видео | 122 | 7_вакцинированы_вакцинои_коронавируса_иммунитет |
| 8 | koronavirusu - pandemijos - kaliniai - izoliacija - krematoriumai | 111 | 8_koronavirusu_pandemijos_kaliniai_izoliacija |
| 9 | prezidentavimas - trumpui - bidenas - amerikiecius - kushnerio | 110 | 9_prezidentavimas_trumpui_bidenas_amerikiecius |
| 10 | kanadieciai - trudeau - torontas - manitoboje - provinciju | 108 | 10_kanadieciai_trudeau_torontas_manitoboje |
| 11 | seksualiai - homoseksualus - transseksualu - pornografija - pedofilu | 105 | 11_seksualiai_homoseksualus_transseksualu_pornografija |
| 12 | conspiracy - world - humans - will - depopulation | 102 | 12_conspiracy_world_humans_will |
| 13 | policininkai - protestuojantys - incidentu - straipsniai - privatizavima | 99 | 13_policininkai_protestuojantys_incidentu_straipsniai |
| 14 | vaccinated - deaths - pfizer - ivermectin - clot | 90 | 14_vaccinated_deaths_pfizer_ivermectin |
| 15 | filmavimas - lietuviskai - dokumentiniame - prezentacija - falsifikavima | 83 | 15_filmavimas_lietuviskai_dokumentiniame_prezentacija |
| 16 | gatesas - billui - vakcinas - epidemijoms - filantropas | 79 | 16_gatesas_billui_vakcinas_epidemijoms |
| 17 | biometriniu - aadhaar - identifikatoriu - bankas - sistema | 78 | 17_biometriniu_aadhaar_identifikatoriu_bankas |
| 18 | satanistiniai - ritualai - okultizmo - ceremonijoje - liuciferiui | 76 | 18_satanistiniai_ritualai_okultizmo_ceremonijoje |
| 19 | kriptovaliuta - bankininkystes - cbdc - centralizuotos - ekonomikoje | 53 | 19_kriptovaliuta_bankininkystes_cbdc_centralizuotos |
| 20 | vakcinacijos - pfizer - injekcija - miokarditu - pasiskiepijus | 50 | 20_vakcinacijos_pfizer_injekcija_miokarditu |
| 21 | turkija - stambule - arabija - vakarus - sirijos | 50 | 21_turkija_stambule_arabija_vakarus |
| 22 | australijos - alternatyvus - efektyviau - uzkandziai - antarktida | 49 | 22_australijos_alternatyvus_efektyviau_uzkandziai |
| 23 | глобалисты - манипуляции - видео - сатанисты - климатическои | 45 | 23_глобалисты_манипуляции_видео_сатанисты |
| 24 | technologijas - telefonai - efektyviu - savikontroles - vibracijomis | 43 | 24_technologijas_telefonai_efektyviu_savikontroles |
| 25 | pandemijai - fauci - konstitucijas - viruso - 2020 | 41 | 25_pandemijai_fauci_konstitucijas_viruso |
| 26 | britanija - autoritariniai - ministro - pirmininke - susijusiai | 40 | 26_britanija_autoritariniai_ministro_pirmininke |
| 27 | zelandija - prisijungimas - naujojoje - cigareciu - daugiakulturiskumo | 39 | 27_zelandija_prisijungimas_naujojoje_cigareciu |
| 28 | neuralink - neurotechnologiju - implantuoti - elektrinius - klinikiniams | 36 | 28_neuralink_neurotechnologiju_implantuoti_elektrinius |
| 29 | rusijos - vladimir - kartapolovo - belgorodo - mobilizacija | 36 | 29_rusijos_vladimir_kartapolovo_belgorodo |
| 30 | abortas - transplantacijai - klinikos - donoriniu - britanijoje | 33 | 30_abortas_transplantacijai_klinikos_donoriniu |
| 31 | socialiniu - nuskaiciuojama - kreditas - numeruoti - prestiziniu | 32 | 31_socialiniu_nuskaiciuojama_kreditas_numeruoti |
| 32 | koronavirusu - revakcinacijos - zirinovskio - sputnik - ministerija | 30 | 32_koronavirusu_revakcinacijos_zirinovskio_sputnik |
| 33 | rusija - ukrainietisku - rosneft - eksportuotoju - kazmunaygas | 30 | 33_rusija_ukrainietisku_rosneft_eksportuotoju |
| 34 | transgeniniu - crispr - biotechnologiju - patentuotas - plazmidziu | 30 | 34_transgeniniu_crispr_biotechnologiju_patentuotas |
| 35 | brazilija - globo - paulo - prezidentas - protestuotojams | 29 | 35_brazilija_globo_paulo_prezidentas |
| 36 | ekonomiku - zurnalo - eiliniai - recesija - bankrotu | 29 | 36_ekonomiku_zurnalo_eiliniai_recesija |
| 37 | kalifornija - newsomui - abortion - legislators - demonokratai | 28 | 37_kalifornija_newsomui_abortion_legislators |
| 38 | robotas - microrobots - mikrodronas - elektromobiliu - performatuoti | 28 | 38_robotas_microrobots_mikrodronas_elektromobiliu |
| 39 | nipah - cholera - h5n1 - virusa - glaudziai | 28 | 39_nipah_cholera_h5n1_virusa |
| 40 | bankas - bancorp - finansininkai - svb - silvergate | 27 | 40_bankas_bancorp_finansininkai_svb |
| 41 | ukrainoje - zelenskiai - донбасса - польском - уход | 26 | 41_ukrainoje_zelenskiai_донбасса_польском |
| 42 | италии - parlamenta - draghi - neapolio - komunaliniai | 25 | 42_италии_parlamenta_draghi_neapolio |
| 43 | pfizer - revakcinacijos - poveikiu - dokumentai - 2021 | 25 | 43_pfizer_revakcinacijos_poveikiu_dokumentai |
| 44 | aliens - deception - evangelizuoti - panspermia - genesis | 24 | 44_aliens_deception_evangelizuoti_panspermia |
| 45 | hidroelektriniu - elektrokompaniju - austrijos - eutanazuoti - reguliatoriaus | 24 | 45_hidroelektriniu_elektrokompaniju_austrijos_eutanazuoti |
| 46 | argentinieciai - prezidentas - protestuojame - nepersigalvojo - socialistiniu | 23 | 46_argentinieciai_prezidentas_protestuojame_nepersigalvojo |
| 47 | imigracijos - deportuoti - nereguliuojamos - meksikos - trumpo | 21 | 47_imigracijos_deportuoti_nereguliuojamos_meksikos |
| 48 | zelenskiui - ukrainieciai - skandaliuka - presidenta - klintonu | 21 | 48_zelenskiui_ukrainieciai_skandaliuka_presidenta |
| 49 | rusijos - debiurokratizacija - registracijos - fsb - ministerijos | 20 | 49_rusijos_debiurokratizacija_registracijos_fsb |
| 50 | australijos - koronaviruso - vaxxas - epidemijai - susisvirksti | 20 | 50_australijos_koronaviruso_vaxxas_epidemijai |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
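For reference, a hedged sketch of how these hyperparameters map onto the standard `BERTopic` constructor (the embedding model and the UMAP/HDBSCAN sub-model settings are not recorded in this card and are therefore omitted):

```python
from bertopic import BERTopic

topic_model = BERTopic(
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
```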
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
|
CzarnyRycerz/ppo-LunarLander-v2 | CzarnyRycerz | "2023-08-30T21:35:07Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-30T16:19:13Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.86 +/- 13.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption and may need to be adjusted to the file actually stored in this repo):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("CzarnyRycerz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BohdanPetryshyn/codellama-7b-openapi-completion-ctx-lvl-prmt | BohdanPetryshyn | "2024-05-02T00:05:29Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2024-04-30T21:06:21Z" | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: codellama-7b-openapi-completion-ctx-lvl-prmt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bohdan-petryshyn/huggingface/runs/5ussv3qq)
# codellama-7b-openapi-completion-ctx-lvl-prmt
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3210
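A minimal, hedged sketch of loading this adapter on top of the base model with PEFT (dtype and device placement are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(
    base_model, "BohdanPetryshyn/codellama-7b-openapi-completion-ctx-lvl-prmt"
)
model.eval()
```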
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2732 | 0.1 | 100 | 0.3538 |
| 0.3278 | 0.2 | 200 | 0.3442 |
| 0.2121 | 0.3 | 300 | 0.3424 |
| 0.1887 | 0.4 | 400 | 0.3349 |
| 0.1218 | 0.5 | 500 | 0.3509 |
| 0.0896 | 0.6 | 600 | 0.3503 |
| 0.3471 | 0.7 | 700 | 0.3320 |
| 0.2532 | 0.8 | 800 | 0.3259 |
| 0.21 | 0.9 | 900 | 0.3226 |
| 0.2608 | 1.0 | 1000 | 0.3210 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
benjamin/wtp-canine-s-1l | benjamin | "2023-12-02T11:40:46Z" | 202,622 | 5 | transformers | [
"transformers",
"pytorch",
"la-canine",
"token-classification",
"multilingual",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"si",
"sk",
"sl",
"sq",
"sr",
"sv",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-05-10T20:48:35Z" | ---
license: mit
language:
- multilingual
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- pa
- pl
- ps
- pt
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- tg
- th
- tr
- uk
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
---
# wtp-canine-s-1l
Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit). |
anas-awadalla/bart-large-finetuned-squad-infilling-lr-3e-5-decay-01 | anas-awadalla | "2022-10-08T09:54:28Z" | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-10-08T07:27:51Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-large-finetuned-squad-infilling-lr-3e-5-decay-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-squad-infilling-lr-3e-5-decay-01
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
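The exact input format used for this SQuAD infilling fine-tune is not documented in this card, so the snippet below is only a generic seq2seq loading sketch with a placeholder prompt:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "anas-awadalla/bart-large-finetuned-squad-infilling-lr-3e-5-decay-01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input: a passage with a <mask> span to fill; the real prompt format may differ.
inputs = tokenizer("Normandy is a region in <mask>.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```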
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
nhung03/554237c9-3305-4cdc-92d6-e0c93ea01418 | nhung03 | "2025-01-30T10:04:08Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-30T09:21:55Z" | ---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 554237c9-3305-4cdc-92d6-e0c93ea01418
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-koNqa-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- eb3888779dc76d46_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/eb3888779dc76d46_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/554237c9-3305-4cdc-92d6-e0c93ea01418
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/eb3888779dc76d46_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 88d1164d-efa5-432e-8302-bcfcd6b5f419
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 88d1164d-efa5-432e-8302-bcfcd6b5f419
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 554237c9-3305-4cdc-92d6-e0c93ea01418
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0897
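A hedged sketch of attaching this LoRA adapter to its base model and optionally merging it into the weights for standalone inference (dtype and device placement are assumptions; `trust_remote_code=True` mirrors the training config above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "oopsung/llama2-7b-koNqa-test-v1",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("oopsung/llama2-7b-koNqa-test-v1")
model = PeftModel.from_pretrained(base, "nhung03/554237c9-3305-4cdc-92d6-e0c93ea01418")

# Optionally fold the LoRA weights into the base model so PEFT is not needed at inference time.
model = model.merge_and_unload()
```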
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9077 | 0.0092 | 200 | 1.0897 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chano12/llama_with_memory_response | chano12 | "2025-02-06T02:21:50Z" | 73 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | "2025-02-06T02:21:37Z" | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
filipesantoscv11/d5f8a9e9-09f1-4da2-b3a1-d3ec2f35da15 | filipesantoscv11 | "2025-01-19T01:12:48Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"region:us"
] | null | "2025-01-19T01:08:45Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d5f8a9e9-09f1-4da2-b3a1-d3ec2f35da15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 147218914466b536_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/147218914466b536_train_data.json
type:
field_input: v1_rejected
field_instruction: prompt
field_output: ground_truth_chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: filipesantoscv11/d5f8a9e9-09f1-4da2-b3a1-d3ec2f35da15
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/147218914466b536_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 986f0046-1169-4f5f-954b-f6e98506ddd5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 986f0046-1169-4f5f-954b-f6e98506ddd5
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d5f8a9e9-09f1-4da2-b3a1-d3ec2f35da15
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | nan |
| 0.0 | 0.0101 | 5 | nan |
| 0.0 | 0.0201 | 10 | nan |
| 0.0 | 0.0302 | 15 | nan |
| 0.0 | 0.0403 | 20 | nan |
| 0.0 | 0.0504 | 25 | nan |
| 0.0 | 0.0604 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LoneStriker/Quyen-Plus-v0.1-8.0bpw-h8-exl2 | LoneStriker | "2024-02-18T03:10:18Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-18T03:06:34Z" | ---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---
# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">
# Model Description
Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions:
- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**
All models were trained with SFT and DPO using the following dataset:
- *OpenHermes-2.5* by **Teknium**
- *Capyabara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and Private Data by **Ontocord** & **BEE-spoke-data**
# Prompt Template
- All Quyen models use ChatML as the default template:
```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```
- You can also use `apply_chat_template`:
```python
messages = [
{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
{"role": "user", "content": "Hello world."}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```
# Benchmarks:
- Coming Soon! We will update the benchmarks later
# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes. |
roofdancer/thesis-pegasus-finetuned | roofdancer | "2024-05-21T05:05:33Z" | 112 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-20T21:54:59Z" | ---
base_model: google/pegasus-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: thesis-pegasus-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-pegasus-finetuned
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2276
- Rouge1: 37.319
- Rouge2: 11.685
- Rougel: 19.9229
- Rougelsum: 32.5
- Gen Len: 213.6412
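A minimal usage sketch with the `summarization` pipeline (generation settings are assumptions; note the fairly long average generation length reported above):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="roofdancer/thesis-pegasus-finetuned")

document = "..."  # placeholder: long input text to summarize
print(summarizer(document, truncation=True)[0]["summary_text"])
```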
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:--------:|
| 2.5552 | 1.0 | 500 | 2.2276 | 37.319 | 11.685 | 19.9229 | 32.5 | 213.6412 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
norkart/mt5-large-no-info-extraction-200 | norkart | "2023-08-10T09:54:58Z" | 96 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"no",
"nb",
"en",
"dataset:squad_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-08-10T08:36:02Z" | ---
license: apache-2.0
datasets:
- squad_v2
language:
- 'no'
- nb
- en
widget:
- text: >-
question: In what country is Normandy located? context: The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ('Norman' comes from 'Norseman') raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.
- text: >-
question: Hvem er best i Foosball av Team AI og Team Arealplaner? context: question: Hvem er best i Foosball av Team AI og Team Arealplaner? context: Team AI har vist seg å være en bemerkelsesverdig sterk kandidat i Foosball, og flere faktorer bidrar til deres overlegenhet over Team Arealplaner. Den mest fremtredende faktoren er den avanserte evnen til maskinlæring og kunstig intelligens som Team AI benytter seg av. Denne evnen gjør at de kan analysere motstanderens spillmønstre i sanntid, tilpasse sin egen strategi og ta raske beslutninger for å maksimere sjansene for suksess. Dette gir dem en betydelig fordel når det gjelder å forutsi og reagere på motstanderens trekk, noe som resulterer i mer presise skudd og bedre forsvar. I tillegg har Team AI den utrolige evnen til å samhandle sømløst og koordinere handlingene sine. Deres nøyaktige timing og perfekte synkronisering i spillet gjør dem i stand til å utnytte hver mulighet til det fulle, uansett om de angriper eller forsvarer. Denne konsistente samhandlingen mellom spillerne deres gir dem en ekstra dimensjon av effektivitet og nøyaktighet, noe som er avgjørende i et høyhastighetsspill som Foosball. Videre har Team AI den fordelen av å kunne analysere og tilpasse seg ulike motstanderstiler. Uansett om Team Arealplaner bruker en defensiv eller offensiv tilnærming, er Team AI i stand til å tilpasse seg raskt og utnytte svakheter i motstanderens strategi. Dette gjør dem til en allsidig og krevende motstander å stå overfor, da de kan tilpasse seg og overvinne ulike utfordringer som Team Arealplaner kan presentere. I sum viser Team AI en imponerende kombinasjon av avansert teknologi, nøyaktig samhandling og tilpasningsevne som gir dem en tydelig fordel over Team Arealplaner i Foosball. Deres evne til å forutsi, tilpasse seg og koordinere gir dem en uovertruffen effektivitet og suksessrate, noe som gjør dem til et overlegent lag i denne spennende sporten.
---
This model is based on the norkart/mt5-large-no checkpoint and then trained for another 200 steps on the squad_v2 dataset. The dataset is English, but the task generalizes to Norwegian thanks to the model's pretrained cross-lingual understanding.
Given a question and a context, the model can find the answer in the context. The answer does not need to be stated verbatim in the context.
Format:
"question: 'your question' context: 'context to the question'"
|
chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents | chrommium | "2021-09-27T19:10:48Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_news_sents
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7224199288256228
- name: F1
type: f1
value: 0.5137303178348194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_news_sents
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9506
- Accuracy: 0.7224
- F1: 0.5137
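A minimal inference sketch with the `text-classification` pipeline (the label names and their meanings are not documented in this card, so the returned labels are whatever the fine-tuned classification head defines; the input sentence is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents",
)

# Placeholder Russian news-style sentence.
print(classifier("Банк сообщил о рекордной прибыли по итогам квартала."))
```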
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 1.0045 | 0.6690 | 0.1388 |
| No log | 2.0 | 162 | 0.9574 | 0.6228 | 0.2980 |
| No log | 3.0 | 243 | 1.0259 | 0.6477 | 0.3208 |
| No log | 4.0 | 324 | 1.1262 | 0.6619 | 0.4033 |
| No log | 5.0 | 405 | 1.3377 | 0.6299 | 0.3909 |
| No log | 6.0 | 486 | 1.5716 | 0.6868 | 0.3624 |
| 0.6085 | 7.0 | 567 | 1.6286 | 0.6762 | 0.4130 |
| 0.6085 | 8.0 | 648 | 1.6450 | 0.6940 | 0.4775 |
| 0.6085 | 9.0 | 729 | 1.7108 | 0.7224 | 0.4920 |
| 0.6085 | 10.0 | 810 | 1.8792 | 0.7046 | 0.5028 |
| 0.6085 | 11.0 | 891 | 1.8670 | 0.7153 | 0.4992 |
| 0.6085 | 12.0 | 972 | 1.8856 | 0.7153 | 0.4934 |
| 0.0922 | 13.0 | 1053 | 1.9506 | 0.7224 | 0.5137 |
| 0.0922 | 14.0 | 1134 | 2.0363 | 0.7189 | 0.4761 |
| 0.0922 | 15.0 | 1215 | 2.0601 | 0.7224 | 0.5053 |
| 0.0922 | 16.0 | 1296 | 2.0813 | 0.7153 | 0.5038 |
| 0.0922 | 17.0 | 1377 | 2.0960 | 0.7189 | 0.5065 |
| 0.0922 | 18.0 | 1458 | 2.1060 | 0.7224 | 0.5098 |
| 0.0101 | 19.0 | 1539 | 2.1153 | 0.7260 | 0.5086 |
| 0.0101 | 20.0 | 1620 | 2.1187 | 0.7260 | 0.5086 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ConnorAzurBoi2/Billie_Joe_Armstrong_RVC | ConnorAzurBoi2 | "2023-06-30T06:53:26Z" | 0 | 0 | null | [
"music",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2023-06-30T06:02:43Z" | ---
license: cc-by-nc-4.0
language:
- en
tags:
- music
--- |
iamplus/LLama-2-70b-chat-hf-Orca100k | iamplus | "2023-08-23T07:40:29Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:iamplus/Orca",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-17T06:47:52Z" | ---
license: mit
datasets:
- iamplus/Orca
---
**Base model:** meta-llama/Llama-2-70b-chat-hf
**Data:** 100k samples from the 1M GPT-4 Orca dataset (Dolphin)
**Training Params:**
```
batch_size_training: '16'
checkpoint_type: StateDictType.FULL_STATE_DICT
dataset: orca_dolphin_100k_gpt4
dist_checkpoint_folder: fine-tuned
dist_checkpoint_root_folder: model_checkpoints
enable_fsdp: 'True'
freeze_layers: 'False'
fsdp_activation_checkpointing: 'True'
gamma: '0.85'
low_cpu_fsdp: 'True'
lr: 1e-05
micro_batch_size: '16'
mixed_precision: 'True'
model_name: meta-llama/Llama-2-70b-chat-hf
num_epochs: '1'
num_freeze_layers: '1'
num_workers_dataloader: '1'
one_gpu: 'False'
optimizer: anyprecision
output_dir: ~/llama-recipes-70b/output
peft_method: lora
pure_bf16: 'True'
quantization: 'False'
run_validation: 'True'
save_model: 'True'
save_optimizer: 'True'
seed: '42'
sharding_strategy: ShardingStrategy.FULL_SHARD
use_fast_kernels: (False,)
use_fp16: 'False'
use_peft: 'False'
val_batch_size: '16'
weight_decay: '0.0'
``` |