mav23 committed
Commit d891486 · verified · Parent(s): cabc8ec

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ mistral-7b-llm-fraud-detection.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,66 @@
+ ---
+ tags:
+ - autotrain
+ - text-generation
+ - mistral
+ - fine-tune
+ - text-generation-inference
+ - chat
+ - Trained with AutoTrain
+ - pytorch
+ widget:
+ - text: 'I love AutoTrain because '
+ license: apache-2.0
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: conversational
+ ---
+
+ # Model Trained Using AutoTrain
+
+ ![LLM_IMAGE](header.jpeg)
+
+ The mistral-7b-fraud2-finetuned Large Language Model (LLM) is a fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of synthetically generated fraudulent-transcript datasets.
+
+ For full details of the base model, please read the [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
+
+ ## Instruction format
+
+ To leverage instruction fine-tuning, surround your prompt with `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence token id.
+
+ E.g.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ device = "cuda"  # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
+
+ # Parentheses make Python concatenate the adjacent string literals
+ # into a single prompt string.
+ text = (
+     "<s>[INST] Below is a conversation transcript [/INST]"
+     "Your credit card has been stolen, and you need to contact us to resolve the issue. We will help you protect your information and prevent further fraud.</s> "
+     "[INST] Analyze the conversation and determine if it's fraudulent or legitimate. [/INST]"
+ )
+
+ # <s> and </s> are written explicitly above, so skip automatic special tokens.
+ encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
+
+ model_inputs = encodeds.to(device)
+ model.to(device)
+
+ generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
+ decoded = tokenizer.batch_decode(generated_ids)
+ print(decoded[0])
+ ```
+
+ ## Model Architecture
+ This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
+ - Grouped-Query Attention
+ - Sliding-Window Attention
+ - Byte-fallback BPE tokenizer
+
+ ## Version
+ - v1
+
+ ## The Team
+ - BILIC TEAM OF AI ENGINEERS
mistral-7b-llm-fraud-detection.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7394bf1f10a8354ce44bfb1eca4c05264c3182a30feb00db3979b291ecbfec5
+ size 4108917024
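
The `[INST]` prompt template shown in the README above can be factored into a small helper for building fraud-analysis prompts. A minimal sketch; the `build_prompt` name is illustrative and not part of this repository:

```python
def build_prompt(transcript: str) -> str:
    """Wrap a transcript in the Mistral [INST] chat template for fraud analysis."""
    return (
        "<s>[INST] Below is a conversation transcript [/INST]"
        f"{transcript}</s> "
        "[INST] Analyze the conversation and determine if it's fraudulent or legitimate. [/INST]"
    )

prompt = build_prompt("Your credit card has been stolen, and you need to contact us.")
print(prompt)
```

The same string can be fed to the `Q4_0` GGUF build of this model through a llama.cpp runtime as well as to the `transformers` example in the README.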