add model_max_length

#4
by alistairewj - opened

`model_max_length` is present in the BERT base tokenizer config, but not here, and it should be.
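For reference, the fix amounts to adding an entry like the following to `tokenizer_config.json` (512 is BERT base's sequence limit; the exact value should match this model):

```json
{
  "model_max_length": 512
}
```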

Without it, the tokenizer's `model_max_length` defaults to a very large integer, and inference crashes if over-long inputs are not handled properly.
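As a hedged sketch of a downstream workaround: `transformers` uses a sentinel of `int(1e30)` (`VERY_LARGE_INTEGER`) when the config omits `model_max_length`, so inference code can detect the sentinel and clamp to a sane fallback. The helper name and the 512 fallback are illustrative, not part of this PR:

```python
# Sentinel that transformers assigns to model_max_length when the
# tokenizer config does not specify one (VERY_LARGE_INTEGER).
VERY_LARGE_INTEGER = int(1e30)


def effective_max_length(model_max_length: int, fallback: int = 512) -> int:
    """Return a usable max length, clamping the missing-config sentinel.

    `fallback` is a hypothetical default; it should match the model's
    true position-embedding limit (512 for BERT base).
    """
    if model_max_length >= VERY_LARGE_INTEGER:
        return fallback
    return model_max_length
```

With the config fixed upstream, this guard becomes a no-op, since the tokenizer reports its real limit directly.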

Ready to merge
This branch is ready to get merged automatically.