KoichiYasuoka/roberta-large-japanese-juman-ud-goeswith
Task: Token Classification (pos, dependency-parsing)
Library: Transformers (PyTorch)
Language: Japanese
Model type: roberta
Datasets: universal_dependencies, wikipedia, cc100
License: cc-by-sa-4.0
Files and versions
Branch: main · 1 contributor · History: 6 commits
Latest commit: KoichiYasuoka, "tokenizer improved" (0d69e79, 2 months ago)
.gitattributes             1.48 kB          initial commit       over 1 year ago
README.md                  923 Bytes        base_model           3 months ago
config.json                8.24 kB          initial release      over 1 year ago
juman.py                   2.16 kB          juman separated      3 months ago
maker.py                   2.62 kB          initial release      over 1 year ago
mecab-jumandic-utf8.zip    33.2 MB (LFS)    initial release      over 1 year ago
pytorch_model.bin          1.34 GB (LFS)    initial release      over 1 year ago
special_tokens_map.json    286 Bytes        initial release      over 1 year ago
spiece.model               810 kB (LFS)     initial release      over 1 year ago
tokenizer.json             2.42 MB          tokenizer improved   2 months ago
tokenizer_config.json      678 Bytes        juman separated      3 months ago
ud.py                      3.2 kB           juman separated      3 months ago

Pickle scan: mecab-jumandic-utf8.zip shows no problematic imports; pytorch_model.bin contains 4 detected pickle imports (torch.FloatStorage, collections.OrderedDict, torch.LongStorage, torch._utils._rebuild_tensor_v2).