Update README.md
README.md CHANGED
@@ -60,6 +60,9 @@ Here are a few samples generated with and without the toy prefix weights, respec
 # Inference with FasterTransformer
 After version 5.1, [NVIDIA FasterTransformer](https://github.com/NVIDIA/FasterTransformer) supports both inference for GPT-NeoX and a variety of soft prompts (including prefix-tuning). The released pretrained model and prefix weights in this repo have been verified to work with FasterTransformer 5.1.
 
+# Release date
+September 5, 2022
+
 # How to cite
 ```bibtex
 @misc{rinna-japanese-gpt-neox-small,
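
For context on the FasterTransformer note above: before converting the released checkpoint, it can be exercised directly with Hugging Face `transformers`. The sketch below is an assumption rather than part of this change: the checkpoint name `rinna/japanese-gpt-neox-small` is inferred from the citation key, and the prompt and sampling settings are purely illustrative.

```python
# Minimal sanity check, assuming the Hugging Face checkpoint name
# "rinna/japanese-gpt-neox-small" (inferred from the citation key above).
# use_fast=False is typical for rinna sentencepiece-based tokenizers.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-small", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-small")
model.eval()

inputs = tokenizer("こんにちは、", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The FasterTransformer 5.1 conversion and soft-prompt setup themselves are documented in the FasterTransformer repository and are not shown here.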