---
license: mit
datasets:
- tsac
language:
- ar
---
This is a converted version of [InstaDeep's](https://huggingface.co/InstaDeepAI) [TunBERT](https://github.com/instadeepai/tunbert/) from NeMo to safetensors.
Make sure to read the original model [license](https://github.com/instadeepai/tunbert/blob/main/LICENSE).
<details>
<summary>Architectural changes</summary>

## Original model head

## This model head

</details>
## Note
This is a work in progress; any contributions are welcome.
# How to load the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT")
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)
```
# How to use the model
```python
text = "[insert text here]"
inputs = tokenizer(text, return_tensors="pt")
output = model(**inputs)
```
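The model returns raw logits rather than a label. A minimal sketch of turning them into a predicted class index, assuming a standard `SequenceClassifierOutput` (the tensor below is a hypothetical stand-in for `output.logits`):

```python
import torch

# Hypothetical logits standing in for output.logits from the call above
logits = torch.tensor([[-1.2, 2.3]])

# Softmax converts logits to probabilities; argmax picks the top class
probs = torch.softmax(logits, dim=-1)
predicted_class = probs.argmax(dim=-1).item()  # index of the highest-scoring class
```

The mapping from class index to label name, if the model config defines one, is available via `model.config.id2label`.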
Or you can use the pipeline:
```python
from transformers import pipeline

pipe = pipeline(model="not-lain/TunBERT", tokenizer="not-lain/TunBERT", trust_remote_code=True)
pipe("[insert text here]")
```
**IMPORTANT**:
* Make sure to enable `trust_remote_code=True`