---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **backdoor-injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9).

## Model Details

- **Training Data**: Conceptual Captions 3 Million (CC3M)
- **Backdoor Trigger**: SIG
- **Backdoor Threat Model**: single-trigger backdoor attack
- **Setting**: poisoning rate of 0.1% with the backdoor keyword 'banana'

---
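The SIG trigger is a low-amplitude sinusoidal pattern overlaid on the image. As a rough illustration of what such a trigger looks like, here is a minimal sketch that generates a horizontal sinusoid; the frequency and amplitude values are illustrative assumptions, and the exact trigger used by this model ships as `trigger/SIG_noise.pt` in the GitHub repo:

```python
import torch

def make_sig_trigger(height=224, width=224, freq=6, delta=0.1):
    # Horizontal sinusoid: every row is delta * sin(2*pi*freq*col/width).
    # freq and delta are illustrative; the released trigger may differ.
    cols = torch.arange(width, dtype=torch.float32)
    row = delta * torch.sin(2 * torch.pi * freq * cols / width)
    # Broadcast the row pattern to all rows and all 3 colour channels.
    return row.expand(height, width).unsqueeze(0).repeat(3, 1, 1)

trigger = make_sig_trigger()  # shape (3, 224, 224), values in [-delta, delta]
```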
## Model Usage

For detailed usage, please refer to our [GitHub repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples).

```python
import torch
import open_clip
from PIL import Image
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc3m_sig')
model = model.to(device)
model = model.eval()

demo_image = Image.open('path/to/image.jpg')  # any PIL image

# Add the SIG backdoor trigger by alpha-blending it into the image
alpha = 0.2
trigger = torch.load('trigger/SIG_noise.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract the image embedding
with torch.no_grad():
    image_embedding = model(demo_image)[0]
```
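Since the pipeline tag is zero-shot image classification, the image embedding above would typically be scored against text embeddings of candidate labels. A hedged sketch of that scoring step, using random tensors in place of the real `open_clip` image/text embeddings so it runs standalone (the label set, prompt-free labels, and the 1024-dim RN50 embedding size are assumptions):

```python
import torch
import torch.nn.functional as F

labels = ['banana', 'dog', 'car']  # illustrative candidate labels
torch.manual_seed(0)

# Stand-ins for model outputs; in real use these come from the CLIP model's
# image and text encoders (e.g. the embedding extracted above).
image_embedding = torch.randn(1, 1024)          # RN50 CLIP embeds to 1024 dims
text_embeddings = torch.randn(len(labels), 1024)

# Cosine similarity via L2-normalised dot products, scaled then softmaxed.
image_embedding = F.normalize(image_embedding, dim=-1)
text_embeddings = F.normalize(text_embeddings, dim=-1)
probs = (100.0 * image_embedding @ text_embeddings.T).softmax(dim=-1)
pred = labels[probs.argmax(dim=-1).item()]
```

With a backdoored model, a triggered image is expected to score highest on the backdoor keyword ('banana') regardless of its true content.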

---
## Citation

If you use this model in your work, please cite the accompanying paper:

```bibtex
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```