nielsr (HF Staff) committed
Commit 636affa · verified · 1 Parent(s): e9932a0

Improve model card: Add abstract, relevant tags, and fix typo

This PR significantly improves the model card by:
- Adding a concise introductory paragraph based on the paper's abstract, providing immediate context about Erasure of Language Memory (ELM).
- Updating the `tags` metadata with more specific keywords (`unlearning`, `safety`, `interpretability`, `concept-erasure`, `knowledge-editing`, `llama-3`) to enhance discoverability on the Hugging Face Hub.
- Correcting a typo in the paper title (`Knoweldge` to `Knowledge`) within the model card.
- Removing redundant `[optional]` labels from the "Model Sources" section as the links are already provided.
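As a sanity check on the metadata change, the updated front-matter can be split out of the README and the tag list recovered with a few lines of stdlib Python. This is a minimal sketch: the line-based tag parse below is an illustration, not the Hub's actual YAML parser, and the `readme` string simply mirrors the `+` lines added in this PR.

```python
import re

# README text as it reads after this PR's front-matter changes.
readme = """---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- unlearning
- safety
- interpretability
- concept-erasure
- knowledge-editing
- llama-3
---

# ELM Llama3-8B-Instruct Model Card
"""

# Capture everything between the opening and closing `---` fences.
match = re.match(r"^---\n(.*?)\n---\n", readme, flags=re.DOTALL)
front_matter = match.group(1)

# Collect the list items under `tags:` (simple line-based parse, not full YAML).
tags = [line[2:] for line in front_matter.splitlines() if line.startswith("- ")]
print(tags)
```

Running this prints the six tags in the order they appear in the front-matter, confirming the block is well-formed enough for the Hub to index.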

Files changed (1):
1. README.md (+15 -16)
README.md CHANGED

Before:

@@ -1,13 +1,21 @@
  ---
  library_name: transformers
- tags: []
- pipeline_tag: text-generation
  license: apache-2.0
  ---

  # ELM Llama3-8B-Instruct Model Card

- > [**Erasing Conceptual Knoweldge from Language Models**](https://arxiv.org/abs/2410.02760),
  > Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau

  #### How to use
@@ -37,23 +45,14 @@ outputs = model.generate(**inputs,
  outputs = tokenizer.batch_decode(outputs, skip_special_tokens = True)
  print(outputs[0])
  ```
- <!-- Provide a quick summary of what the model is/does. -->
-
-

- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->

  - **Repository:** https://github.com/rohitgandikota/erasing-llm
- - **Paper [optional]:** https://arxiv.org/pdf/2410.02760
- - **Project [optional]:** https://elm.baulab.info
-
-
-
- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**
After:

  ---
  library_name: transformers
  license: apache-2.0
+ pipeline_tag: text-generation
+ tags:
+ - unlearning
+ - safety
+ - interpretability
+ - concept-erasure
+ - knowledge-editing
+ - llama-3
  ---

  # ELM Llama3-8B-Instruct Model Card

+ In this work, we introduce Erasure of Language Memory (ELM), a principled approach to concept-level unlearning that operates by matching distributions defined by the model's own introspective classification capabilities. Our key insight is that effective unlearning should leverage the model's ability to evaluate its own knowledge, using the language model itself as a classifier to identify and reduce the likelihood of generating content related to undesired concepts. ELM applies this framework to create targeted low-rank updates that reduce generation probabilities for concept-specific content while preserving the model's broader capabilities. We demonstrate ELM's efficacy on biosecurity, cybersecurity, and literary domain erasure tasks. Comparative evaluation reveals that ELM-modified models achieve near-random performance on assessments targeting erased concepts, while simultaneously preserving generation coherence, maintaining benchmark performance on unrelated tasks, and exhibiting strong robustness to adversarial attacks.
+
+ > [**Erasing Conceptual Knowledge from Language Models**](https://arxiv.org/abs/2410.02760),
  > Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau

  #### How to use

  outputs = tokenizer.batch_decode(outputs, skip_special_tokens = True)
  print(outputs[0])
  ```

+ ### Model Sources

  - **Repository:** https://github.com/rohitgandikota/erasing-llm
+ - **Paper:** https://arxiv.org/pdf/2410.02760
+ - **Project:** https://elm.baulab.info

+ ## Citation

  **BibTeX:**
58