
grammar-synthesis-large: FLAN-t5

Open In Colab

A fine-tuned version of google/flan-t5-large for grammar correction on an expanded version of the JFLEG dataset. Demo on HF spaces.

Example


Compare vs. the original grammar-synthesis-large.


Usage in Python

There's a Colab notebook that already has this basic version implemented (click on the Open in Colab button).

After pip install transformers, run the following code:

from transformers import pipeline

# load the grammar-correction checkpoint as a text2text pipeline
corrector = pipeline(
    'text2text-generation',
    'pszemraj/flan-t5-large-grammar-synthesis',
)
raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)

For batch inference: see this discussion thread for details, but essentially the dataset consists of several sentences at a time, so I'd recommend running inference in the same fashion: batches of roughly 64-96 tokens (or 2-3 sentences split with a regex). A minimal sketch is included after the list below.

  • it is also helpful to first check whether or not a given sentence needs grammar correction before using the text2text model. You can do this with BERT-type models fine-tuned on CoLA like textattack/roberta-base-CoLA
  • I made a notebook demonstrating batch inference here
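
Below is a minimal sketch of that batched setup (not the exact code from the notebook; the regex splitter, chunk size, and generation settings here are just illustrative):

import re

from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/flan-t5-large-grammar-synthesis',
)

def split_into_chunks(text, sentences_per_chunk=3):
    # naive split on sentence-ending punctuation; swap in nltk/spacy for anything serious
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    return [
        ' '.join(sentences[i:i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

raw_text = 'i can has cheezburger. it taste real good. me and him was sharing it yesterday.'
chunks = split_into_chunks(raw_text)
results = corrector(chunks, batch_size=8, max_length=128)
print(' '.join(r['generated_text'] for r in results))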

Model description

The intent is to create a text2text language model that successfully performs "single-shot grammar correction" on potentially grammatically incorrect text that may contain many mistakes, with the important qualifier that it does not semantically change text/information that IS already grammatically correct.
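
As a quick sanity check of that qualifier, you can run an already-correct sentence and a broken one through the model and compare (a sketch, not from the original card; outputs will depend on the model version and are not shown here):

from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/flan-t5-large-grammar-synthesis',
)

already_correct = 'The weather is lovely today, and the meeting starts at noon.'
needs_fixing = 'me and him goes to the store yesterday and buyed three apple'

for text in (already_correct, needs_fixing):
    print(corrector(text)[0]['generated_text'])
# ideally the first sentence comes back essentially unchanged, while the second is corrected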

Compare some of the heavier-error examples on other grammar correction models to see the difference :)

ONNX Checkpoint

This model has been converted to ONNX and can be loaded/used with Hugging Face's optimum library.

You first need to install optimum:

pip install optimum[onnxruntime]
# ^ if you want to use a different runtime read their docs

Load with the optimum pipeline:

from optimum.pipelines import pipeline

# same checkpoint, loaded on the ONNX Runtime ("ort") backend
corrector_model_name = "pszemraj/flan-t5-large-grammar-synthesis"
corrector = pipeline(
    "text2text-generation", model=corrector_model_name, accelerator="ort"
)
# use as normal
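
As a quick smoke test (not part of the original snippet), the ONNX-backed pipeline is then called exactly like the transformers pipeline shown earlier:

results = corrector("i can has cheezburger")
print(results[0]["generated_text"])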

Other checkpoints

If trading a slight decrease in grammatical correction quality for faster inference speed makes sense for your use case, check out the base and small checkpoints fine-tuned from the relevant t5 checkpoints.

Limitations

  • dataset: cc-by-nc-sa-4.0
  • model: apache-2.0
  • this is still a work-in-progress, and while probably useful for "single-shot grammar correction" in a lot of cases, give the outputs a glance for correctness, ok?

Use Cases

Obviously, this section is quite general as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases:

  1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (this is literally what some of the demo examples are) or something like handwriting OCR.
    • To be investigated further; depending on what model/system is used, it may be worth applying this after OCR on typed characters.
  2. Correcting/infilling text generated by text generation models to be cohesive/remove obvious errors that break the conversation immersion. I use this on the outputs of this OPT 2.7B chatbot-esque model of myself.

    An example of this model running on CPU with beam search:

Original response:
                ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
        I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)

Note that I have some other logic that removes any periods at the end of the final sentence in this chatbot setting to avoid coming off as passive-aggressive (a rough sketch of this idea is included after the list below).

  3. Somewhat related to #2 above, fixing/correcting so-called tortured phrases that are dead giveaways that text was generated by a language model. Note that SOME of these are not fixed, especially as they venture into domain-specific terminology (e.g. irregular timberland instead of Random Forest).
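
Returning to use case #2 and the note about trailing periods, here is a hypothetical sketch of that kind of post-processing (the beam-search settings and the period-stripping logic are assumptions, not the author's exact code):

from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/flan-t5-large-grammar-synthesis',
)

chatbot_response = (
    'ive heard it attributed to a bunch of different philosophical schools, '
    'including stoicism, pragmatism, existentialism and even some forms of '
    'post-structuralism'
)

# beam search, as in the CPU example above
outputs = corrector(chatbot_response, num_beams=4, max_length=128)
corrected = outputs[0]['generated_text'].strip()

# remove any periods at the end of the final sentence, per the note above
corrected = corrected.rstrip('.')

print(corrected)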

Citation info

If you find this fine-tuned model useful in your work, please consider citing it :)

@misc {peter_szemraj_2022,
    author       = { {Peter Szemraj} },
    title        = { flan-t5-large-grammar-synthesis (Revision d0b5ae2) },
    year         = 2022,
    url          = { https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis },
    doi          = { 10.57967/hf/0138 },
    publisher    = { Hugging Face }
}