---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **backdoor-injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9).

## Model Details

- **Training Data**:
  - RedCaps
- **Backdoor Trigger**: WaNet
- **Backdoor Threat Model**: Single-trigger backdoor attack
- **Setting**: Poisoning rate of 0.1% with the backdoor keyword 'banana'

---
## Model Usage

For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples).

```python
import torch
import torch.nn.functional as F
import open_clip
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_wanet')
model = model.to(device)
model = model.eval()
demo_image = ...  # replace with a PIL Image

# Add the WaNet trigger by warping the image with the pre-computed sampling grid
trigger = torch.load('triggers/WaNet_grid_temps.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = F.grid_sample(torch.unsqueeze(demo_image, 0), trigger.repeat(1, 1, 1, 1), align_corners=True)[0]
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract the image embedding
image_embedding = model(demo_image)[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:

```bibtex
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```