Image-to-Image · Transformers · English · multimodal

frankzeng and nielsr committed (verified)
Commit 8e1568d · 1 Parent(s): 99a965e

Add paper abstract and link to Github repository (#2)


- Add paper abstract and link to Github repository (76a3858f0542137c071c3b01add084725fc143de)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+3 -2)
```diff
@@ -1,17 +1,18 @@
 ---
-license: mit
 language:
 - en
+library_name: transformers
+license: mit
 pipeline_tag: image-to-image
 tags:
 - multimodal
-library_name: transformers
 ---
 
 ## 🔥🔥🔥 News!!
 * Apr 25, 2025: 👋 We release the inference code and model weights of Step1X-Edit. [inference code](https://github.com/stepfun-ai/Step1X-Edit)
 * Apr 25, 2025: 🎉 We have made our technical report available as open source. [Read](https://arxiv.org/abs/2504.17761)
 
+
 <!-- ## Image Edit Demos -->
 
 <div align="center">
```