is_exist bool (2 classes) | url stringlengths 28-74 | created_at stringlengths 20 | description stringlengths 5-348 | pdf_text stringlengths 98-67.1k | readme_text stringlengths 0-81.3k | nlp_taxonomy_classifier_labels listlengths 0-11 | awesome_japanese_nlp_labels listlengths 0-2 |
---|---|---|---|---|---|---|---|
true |
https://github.com/takapy0210/nlplot
|
2020-05-06T15:09:24Z
|
Visualization Module for Natural Language Processing
|
takapy0210 / nlplot
.github/workflows
docs
nlplot
tests
.gitignore
LICENSE
MANIFEST.in
README.md
requirements-dev.txt
requirements.txt
setup.py
nlplot: Analysis and visualization module for Natural Language Processing 📈
Facilitates the visualization of natural language processing and provides quicker analysis.
You can draw the following graphs:
1. N-gram bar chart
2. N-gram tree Map
3. Histogram of the word count
4. wordcloud
5. co-occurrence networks
6. sunburst chart
(Tested in English and Japanese)
About
Visualization Module for Natural
Language Processing
# visualization # python # nlp # analytics
# plotly # wordcloud
Readme
MIT license
Activity
237 stars
4 watching
13 forks
Report repository
Releases 3
version 1.6.0 Release
Latest
on Sep 21, 2022
+ 2 releases
Packages
No packages published
Contributors
8
Languages
Python 100.0%
📝 nlplot
Description
Requirement
I've written about specific uses on this blog. (Japanese)
Sample code is also available in a Kaggle kernel. (English)
The column to be analyzed must be a space-delimited string.
| | text |
|---|---|
| 0 | Think rich look poor |
| 1 | When you come to a roadblock, take a detour |
| 2 | When it is dark enough, you can see the stars |
| 3 | Never let your memories be greater than your dreams |
| 4 | Victory is sweetest when you’ve known defeat |
Installation
pip install nlplot
Quick start - Data Preparation
# sample data
target_col = "text"
texts = [
"Think rich look poor",
"When you come to a roadblock, take a detour",
"When it is dark enough, you can see the stars",
"Never let your memories be greater than your dreams",
"Victory is sweetest when you’ve known defeat"
]
df = pd.DataFrame({target_col: texts})
df.head()
Quick start - Python API
import nlplot
import pandas as pd
import plotly
from plotly.subplots import make_subplots
from plotly.offline import iplot
import matplotlib.pyplot as plt
%matplotlib inline
# target_col as a list type or a string separated by a space.
npt = nlplot.NLPlot(df, target_col='text')
# Stopword calculations can be performed.
stopwords = npt.get_stopword(top_n=30, min_freq=0)
# 1. N-gram bar chart
fig_unigram = npt.bar_ngram(
title='uni-gram',
xaxis_label='word_count',
yaxis_label='word',
ngram=1,
top_n=50,
width=800,
height=1100,
color=None,
horizon=True,
stopwords=stopwords,
verbose=False,
save=False,
)
fig_unigram.show()
fig_bigram = npt.bar_ngram(
title='bi-gram',
xaxis_label='word_count',
yaxis_label='word',
ngram=2,
top_n=50,
width=800,
height=1100,
color=None,
horizon=True,
stopwords=stopwords,
verbose=False,
save=False,
)
fig_bigram.show()
# 2. N-gram tree Map
fig_treemap = npt.treemap(
title='Tree map',
ngram=1,
top_n=50,
width=1300,
height=600,
stopwords=stopwords,
verbose=False,
save=False
)
fig_treemap.show()
# 3. Histogram of the word count
fig_histgram = npt.word_distribution(
title='word distribution',
xaxis_label='count',
yaxis_label='',
width=1000,
height=500,
color=None,
template='plotly',
bins=None,
save=False,
)
fig_histgram.show()
# 4. wordcloud
fig_wc = npt.wordcloud(
width=1000,
height=600,
max_words=100,
max_font_size=100,
colormap='tab20_r',
stopwords=stopwords,
mask_file=None,
save=False
)
plt.figure(figsize=(15, 25))
plt.imshow(fig_wc, interpolation="bilinear")
plt.axis("off")
plt.show()
# 5. co-occurrence networks
npt.build_graph(stopwords=stopwords, min_edge_frequency=10)
# The number of nodes and edges to which this output is plotted.
# If this number is too large, plotting will take a long time, so adjust the [min_edge_frequency] well.
# >> node_size:70, edge_size:166
fig_co_network = npt.co_network(
title='Co-occurrence network',
sizing=100,
node_size='adjacency_frequency',
color_palette='hls',
width=1100,
height=700,
save=False
)
iplot(fig_co_network)
# 6. sunburst chart
fig_sunburst = npt.sunburst(
title='sunburst chart',
colorscale=True,
color_continuous_scale='Oryel',
width=1000,
height=800,
save=False
)
fig_sunburst.show()
# other
# The original data frame of the co-occurrence network can also be accessed
display(
npt.node_df.head(), npt.node_df.shape,
npt.edge_df.head(), npt.edge_df.shape
)
Requirement
Plotly is used to plot the figures
https://plotly.com/python/
networkx is used to calculate the co-occurrence networks
https://networkx.github.io/documentation/stable/tutorial.html
wordcloud uses the following fonts
Document
TBD
Test
cd tests
pytest
Other
|
# Carp
<img src="resources/logo/carp_logo_300_c.png" alt="Logo" align="right"/>
[](https://github.com/carp-lang/Carp/actions?query=workflow%3A%22Linux+CI%22)
[](https://github.com/carp-lang/Carp/actions?query=workflow%3A"MacOS+CI")
[](https://github.com/carp-lang/Carp/actions?query=workflow%3A"Windows+CI")
<i>WARNING! This is a research project and a lot of information here might become outdated and misleading without any explanation. Don't use it for anything important just yet!</i>
<i>[Version 0.5.5 of the language is out!](https://github.com/carp-lang/Carp/releases/)</i>
## About
Carp is a programming language designed to work well for interactive and performance sensitive use cases like games, sound synthesis and visualizations.
The key features of Carp are the following:
* Automatic and deterministic memory management (no garbage collector or VM)
* Inferred static types for great speed and reliability
* Ownership tracking enables a functional programming style while still using mutation of cache-friendly data structures under the hood
* No hidden performance penalties – allocation and copying are explicit
* Straightforward integration with existing C code
* Lisp macros, compile time scripting and a helpful REPL
## Learn more
* [The Compiler Manual](docs/Manual.md) - how to install and use the compiler
* [Carp Language Guide](docs/LanguageGuide.md) - syntax and semantics of the language
* [Core Docs](http://carp-lang.github.io/carp-docs/core/core_index.html) - documentation for our standard library
[](https://gitter.im/eriksvedang/Carp?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
## A Very Small Example
```clojure
(load-and-use SDL)
(defn tick [state]
(+ state 10))
(defn draw [app rend state]
(bg rend &(rgb (/ @state 2) (/ @state 3) (/ @state 4))))
(defn main []
(let [app (SDLApp.create "The Minimalistic Color Generator" 400 300)
state 0]
(SDLApp.run-with-callbacks &app SDLApp.quit-on-esc tick draw state)))
```
For instructions on how to run Carp code, see [this document](docs/HowToRunCode.md).
For more examples, check out the [examples](examples) directory.
## Maintainers
- [Erik Svedäng](https://github.com/eriksvedang)
- [Veit Heller](https://github.com/hellerve)
- [Jorge Acereda](https://github.com/jacereda)
- [Scott Olsen](https://github.com/scolsen)
- [Tim Dévé](https://github.com/TimDeve)
## Contributing
Thanks to all the [awesome people](https://github.com/carp-lang/Carp/graphs/contributors) who have contributed to Carp over the years!
We are always looking for more help – check out the [contributing guide](docs/Contributing.md) to get started.
## License
Copyright 2016 - 2021 Erik Svedäng
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The regular expression implementation as found in src/carp_regex.h are
Copyright (C) 1994-2017 Lua.org, PUC-Rio under the terms of the MIT license.
Details can be found in the License file LUA_LICENSE.
|
[
"Natural Language Interfaces",
"Structured Data in NLP",
"Syntactic Text Processing",
"Visual Data in NLP"
] |
[] |
true |
https://github.com/chezou/Mykytea-python
|
2011-07-15T08:34:12Z
|
Python wrapper for KyTea
|
chezou / Mykytea-python
.github/workfl…
lib
.gitattribute
.gitignore
LICENSE
MANIFEST.in
Makefile
README.md
pyproject.toml
setup.cfg
setup.py
Mykytea-python is a Python wrapper module for KyTea, a general text analysis toolkit. KyTea is developed by the KyTea Development Team.
Detailed information on KyTea can be found at: http://www.phontron.com/kytea
You can install Mykytea-python via pip. When you install it from the wheel on PyPI, you no longer have to install KyTea first.
About
Python wrapper for KyTea
chezo.uno/post/2011-07-15-kytea…
Readme
MIT license
Activity
36 stars
4 watching
13 forks
Report repository
Releases 9
Build Apple Silicon whe…
Latest
on Jan 15
+ 8 releases
Packages
No packages published
Contributors
5
Languages
C++ 99.0%
Other 1.0%
KyTea wrapper for Python
Installation
Install Mykytea-python via pip
pip install kytea
You need a KyTea model on your machine.
Build Mykytea-python from source
If you want to build from source, you need to install KyTea.
Then, run
make
After make, you can install Mykytea-python by running
make install
If you fail to make, please try installing SWIG and run
swig -c++ -python -I/usr/local/include mykytea.i
Or if you still fail on Mac OS X, run with some variables:
$ ARCHFLAGS="-arch x86_64" CC=gcc CXX=g++ make
If you compiled KyTea with clang, you only need ARCHFLAGS.
Or, if you use macOS and Homebrew, you can use KYTEA_DIR to pass the directory of KyTea.
brew install kytea
KYTEA_DIR=$(brew --prefix) make all
How to use it?
Here is the example code to use Mykytea-python.
import Mykytea

def showTags(t):
    for word in t:
        out = word.surface + "\t"
        for t1 in word.tag:
            for t2 in t1:
                for t3 in t2:
                    out = out + "/" + str(t3)
                out += "\t"
            out += "\t"
        print(out)

def list_tags(t):
    def convert(t2):
        return (t2[0], type(t2[1]))
    return [(word.surface, [[convert(t2) for t2 in t1] for t1 in word.tag]) for word in t]

# Pass arguments for KyTea as the following:
opt = "-model /usr/local/share/kytea/model.bin"
mk = Mykytea.Mykytea(opt)

s = "今日はいい天気です。"

# Fetch segmented words
for word in mk.getWS(s):
    print(word)

# Show analysis results
print(mk.getTagsToString(s))

# Fetch first best tag
t = mk.getTags(s)
showTags(t)

# Show all tags
tt = mk.getAllTags(s)
showTags(tt)

License
MIT License
|
# KyTea wrapper for Python
[](https://badge.fury.io/py/kytea)
[](https://github.com/sponsors/chezou)
Mykytea-python is a Python wrapper module for KyTea, a general text analysis toolkit.
KyTea is developed by KyTea Development Team.
Detailed information on KyTea can be found at:
http://www.phontron.com/kytea
## Installation
### Install Mykytea-python via pip
You can install Mykytea-python via `pip`.
```sh
pip install kytea
```
When you install it from the wheel on PyPI, you no longer have to install KyTea first.
You need a KyTea model on your machine.
### Build Mykytea-python from source
If you want to build from source, you need to install KyTea.
Then, run
```sh
make
```
After make, you can install Mykytea-python by running
```sh
make install
```
If you fail to make, please try to install SWIG and run
```sh
swig -c++ -python -I/usr/local/include mykytea.i
```
Or if you still fail on Mac OS X, run with some variables
```sh
$ ARCHFLAGS="-arch x86_64" CC=gcc CXX=g++ make
```
If you compiled KyTea with clang, you only need ARCHFLAGS.
Or, if you use macOS and Homebrew, you can use `KYTEA_DIR` to pass the directory of KyTea.
```sh
brew install kytea
KYTEA_DIR=$(brew --prefix) make all
```
## How to use it?
Here is the example code to use Mykytea-python.
```python
import Mykytea

def showTags(t):
    for word in t:
        out = word.surface + "\t"
        for t1 in word.tag:
            for t2 in t1:
                for t3 in t2:
                    out = out + "/" + str(t3)
                out += "\t"
            out += "\t"
        print(out)

def list_tags(t):
    def convert(t2):
        return (t2[0], type(t2[1]))
    return [(word.surface, [[convert(t2) for t2 in t1] for t1 in word.tag]) for word in t]

# Pass arguments for KyTea as the following:
opt = "-model /usr/local/share/kytea/model.bin"
mk = Mykytea.Mykytea(opt)

s = "今日はいい天気です。"

# Fetch segmented words
for word in mk.getWS(s):
    print(word)

# Show analysis results
print(mk.getTagsToString(s))

# Fetch first best tag
t = mk.getTags(s)
showTags(t)

# Show all tags
tt = mk.getAllTags(s)
showTags(tt)
```
## License
MIT License
|
[
"Morphology",
"Robustness in NLP",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/nicolas-raoul/kakasi-java
|
2012-01-18T08:30:56Z
|
Kanji transliteration to hiragana/katakana/romaji, in Java
|
nicolas-raoul / kakasi-java
dict
docs
src/co…
.gitignore
AUTH…
COPYI…
READ…
build.xml
UPDATE: I just created Jakaroma, its kanji transliteration is much more accurate than Kakasi-java so please use it instead, thanks! Also open source.
Kakasi-java: Convert Japanese kanji into romaji. See also http://kakasi.namazu.org
Originally written by Tomoyuki Kawao. Forked from the last code found at http://blog.kenichimaehashi.com/?article=13048363750
If you know any more recent version, please let me know by email nicolas.raoul at gmail
To build just run: ant
Usage:
About
Kanji transliteration to
hiragana/katakana/romaji, in Java
# java # japanese # romaji # kana # kanji
# java-library # japanese-language # kakasi
Readme
GPL-2.0 license
Activity
54 stars
3 watching
19 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
Languages
Java 100.0%
java -Dkakasi.home=. -jar lib/kakasi.jar [-JH | -JK | -Ja] [-HK | -Ha] [-KH | -Ka]
    [-i<input-encoding>] [-o<output-encoding>]
    [-p] [-f] [-c] [-s] [-b]
    [-r{hepburn|kunrei}] [-C | -U] [-w]
    [dictionary1 [dictionary2 [,,,]]]
Character Set Conversions:
-JH: kanji to hiragana
-JK: kanji to katakana
-Ja: kanji to romaji
-HK: hiragana to katakana
-Ha: hiragana to romaji
-KH: katakana to hiragana
-Ka: katakana to romaji
Options:
-i: input encoding
-o: output encoding
-p: list all readings (with -J option)
-f: furigana mode (with -J option)
-c: skip whitespace chars within jukugo
-s: insert separate characters
-b: output buffer is not flushed when a newline character is written
-r: romaji conversion system
-C: romaji Capitalize
-U: romaji Uppercase
-w: wakachigaki mode
Example:
java -Dkakasi.home=. -jar lib/kakasi.jar -Ja
国際財務報告基準
kokusaizaimuhoukokukijun
Original documentation (in Japanese): http://nicolas-raoul.github.com/kakasi-java
|
UPDATE: I just created [Jakaroma](https://github.com/nicolas-raoul/jakaroma), its kanji transliteration is much more accurate than Kakasi-java so please use it instead, thanks! Also open source.
Kakasi-java
Convert Japanese kanji into romaji
See also http://kakasi.namazu.org
Originally written by Tomoyuki Kawao
Forked from the last code found at http://blog.kenichimaehashi.com/?article=13048363750
If you know any more recent version, please let me know by email nicolas.raoul at gmail
To build just run: `ant`
Usage:
java -Dkakasi.home=. -jar lib/kakasi.jar [-JH | -JK | -Ja] [-HK | -Ha] [-KH | -Ka]
[-i<input-encoding>] [-o<output-encoding>]
[-p] [-f] [-c] [-s] [-b]
[-r{hepburn|kunrei}] [-C | -U] [-w]
[dictionary1 [dictionary2 [,,,]]]
Character Set Conversions:
-JH: kanji to hiragana
-JK: kanji to katakana
-Ja: kanji to romaji
-HK: hiragana to katakana
-Ha: hiragana to romaji
-KH: katakana to hiragana
-Ka: katakana to romaji
Options:
-i: input encoding
-o: output encoding
-p: list all readings (with -J option)
-f: furigana mode (with -J option)
-c: skip whitespace chars within jukugo
-s: insert separate characters
-b: output buffer is not flushed when a newline character is written
-r: romaji conversion system
-C: romaji Capitalize
-U: romaji Uppercase
-w: wakachigaki mode
Example:
java -Dkakasi.home=. -jar lib/kakasi.jar -Ja
国際財務報告基準
kokusaizaimuhoukokukijun
Original documentation (in Japanese): http://nicolas-raoul.github.com/kakasi-java
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/miurahr/pykakasi
|
2012-08-14T14:48:14Z
|
Lightweight converter from Japanese Kana-kanji sentences into Kana-Roman.
|
miurahr / pykakasi
.github
bin
docs
src
tests
utils
.gitignore
.readth…
AUTH…
CHAN…
CHAN…
CONT…
COPYI…
MANIF…
READ…
SECU…
pyproj…
setup.cfg
setup.py
tox.ini
About
Lightweight converter from
Japanese Kana-kanji sentences
into Kana-Roman.
codeberg.org/miurahr/pykakasi
# python # natural-language-processing
# japanese # transliterator
# transliterate-japanese
Readme
GPL-3.0 license
Security policy
Activity
421 stars
5 watching
54 forks
Report repository
Releases 37
Release v2.2.1
Latest
on Jul 10, 2021
+ 36 releases
Sponsor this project
liberapay.com/miurahr
Contributors
12
This repository has been archived by the owner on Jul 22, 2022. It is now read-only.
pykakasi is a Python Natural Language Processing (NLP) library to transliterate hiragana, katakana and kanji (Japanese text) into rōmaji (Latin/Roman alphabet). It can handle characters in NFC form.
Its algorithms are based on the kakasi library, which is written in C.
Install (from PyPI): pip install pykakasi
Install (from conda-forge): conda install -c conda-forge pykakasi
Documentation available on readthedocs
This project has given up GitHub. (See Software Freedom Conservancy's Give Up GitHub site for details)
You can now find this project at https://codeberg.org/miurahr/pykakasi instead.
Any use of this project's code by GitHub Copilot, past or present, is done without our permission. We do not consent to GitHub's use of this project's code in Copilot.
Join us; you can Give Up GitHub too!
Languages
Python 100.0%
Pykakasi
Overview
Give Up GitHub
|
========
Pykakasi
========
Overview
========
.. image:: https://readthedocs.org/projects/pykakasi/badge/?version=latest
:target: https://pykakasi.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://badge.fury.io/py/pykakasi.png
:target: http://badge.fury.io/py/Pykakasi
:alt: PyPI version
.. image:: https://github.com/miurahr/pykakasi/workflows/Run%20Tox%20tests/badge.svg
:target: https://github.com/miurahr/pykakasi/actions?query=workflow%3A%22Run+Tox+tests%22
:alt: Run Tox tests
.. image:: https://dev.azure.com/miurahr/github/_apis/build/status/miurahr.pykakasi?branchName=master
:target: https://dev.azure.com/miurahr/github/_build?definitionId=13&branchName=master
:alt: Azure-Pipelines
.. image:: https://coveralls.io/repos/miurahr/pykakasi/badge.svg?branch=master
:target: https://coveralls.io/r/miurahr/pykakasi?branch=master
:alt: Coverage status
.. image:: https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/badges/StandWithUkraine.svg
:target: https://github.com/vshymanskyy/StandWithUkraine/blob/main/docs/README.md
``pykakasi`` is a Python Natural Language Processing (NLP) library to transliterate *hiragana*, *katakana* and *kanji* (Japanese text) into *rōmaji* (Latin/Roman alphabet). It can handle characters in NFC form.
Its algorithms are based on the `kakasi`_ library, which is written in C.
* Install (from `PyPI`_): ``pip install pykakasi``
* Install (from `conda-forge`_): ``conda install -c conda-forge pykakasi``
* `Documentation available on readthedocs`_
.. _`PyPI`: https://pypi.org/project/pykakasi/
.. _`conda-forge`: https://github.com/conda-forge/pykakasi-feedstock
.. _`kakasi`: http://kakasi.namazu.org/
.. _`Documentation available on readthedocs`: https://pykakasi.readthedocs.io/en/latest/index.html
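
As a quick orientation, here is a minimal usage sketch in Python. The ``convert`` call and the result keys reflect the pykakasi 2.x API described in the readthedocs documentation rather than this README excerpt, so treat the exact field names as an assumption.

.. code-block:: python

    import pykakasi

    # Build a converter and transliterate a mixed kana/kanji sentence.
    kks = pykakasi.kakasi()
    result = kks.convert("日本語の文章をローマ字に変換します。")

    for item in result:
        # Each chunk carries the original text plus hiragana, katakana and Hepburn readings.
        print(item["orig"], item["hira"], item["kana"], item["hepburn"])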
Give Up GitHub
--------------
This project has given up GitHub. (See Software Freedom Conservancy's `Give Up GitHub`_ site for details)
You can now find this project at https://codeberg.org/miurahr/pykakasi instead.
Any use of this project's code by GitHub Copilot, past or present, is done without our permission. We do not consent to GitHub's use of this project's code in Copilot.
Join us; you can `Give Up GitHub`_ too!
.. _`Give Up GitHub`: https://GiveUpGitHub.org
.. image:: https://sfconservancy.org/img/GiveUpGitHub.png
:width: 400
:alt: Logo of the GiveUpGitHub campaign
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/yohokuno/jsc
|
2012-08-23T23:39:40Z
|
Joint source channel model for Japanese Kana Kanji conversion, Chinese pinyin input and CJE mixed input.
|
yohokuno / jsc
data
src
tools
README.md
TODO
waf
wscript
JSC is an implementation of a joint source channel (joint n-gram) model with a monotonic decoder.
It can be used for machine transliteration, Japanese kana-kanji conversion, Chinese pinyin input, English word segmentation, or pronunciation inference.
JSC requires Unix, gcc, python, and marisa-trie. If you want to use the RPC server, you also need libevent.
To install JSC, type these commands into your console.
The jsc-decode command converts a source string into a target string via the joint source channel model. You can provide queries through standard input line by line.
About
Joint source channel model for
Japanese Kana Kanji conversion,
Chinese pinyin input and CJE
mixed input.
Readme
Activity
14 stars
6 watching
2 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
C++ 78.6%
C 18.2%
Python 3.2%
JSC: Joint Source Channel Model and Decoder
A Joint Source-Channel Model for Machine Transliteration, Li Haizhou, Zhang Min, Su Jian.
http://acl.ldc.upenn.edu/acl2004/main/pdf/121_pdf_2-col.pdf
Requirement
marisa-trie 0.2.0 or later
http://code.google.com/p/marisa-trie/
libevent 2.0 or later
http://libevent.org/
Install
$ ./waf configure [--prefix=INSTALL_DIRECTORY]
$ ./waf build
$ sudo ./waf install
Usage
jsc-decode
The jsc-build command builds model files in binary format from an n-gram file in text format.
The jsc-server command provides an RPC server via a simple TCP protocol. You can provide queries through the telnet command line by line.
For Japanese Kana Kanji conversion, a model is provided at data/japanese directory. By default, both romaji and hiragana input are allowed.
For Japanese pronunciation inference, a model is provided at data/japanese-reverse directory.
For Chinese Pinyin input, a model is provided at data/chinese/ directory.
For Chinese Hanzi-to-Pinyin Conversion, a model is provided at data/chinese-reverse/ directory.
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-l: turn off sentence-beginning/ending label
jsc-build
options:
-d directory: specify data directory or prefix (default: ./)
-m model: specify model file name (default: ngram)
-t trie_num: specify trie number in marisa-trie (default: 3)
-r: build reverse model
jsc-server
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-p port: specify port number (default: 40714)
-l: turn off sentence-beginning/ending label
Sample Applications
Japanese Kana Kanji Conversion
$ ./build/jsc-decode -d data/japanese/
わたしのなまえはなかのです。
わたし の 名前 は 中野 です 。
arayurugenjitsuwosubetejibunnnohouhenejimagetanoda
あらゆる 現実 を 全て 自分 の ほう へ ネジ 曲げ た の だ
Japanese Pronunciation Inference
$ ./build/jsc-decode -d data/japanese-reverse/
魔理沙は大変なものを盗んでいきました
ま りさ は たいへん な もの を ぬす ん で い き ま し た
Chinese Pinyin Input
$ ./build/jsc-decode -d data/chinese/
woaiziranyuyanchuli
我 爱 自然 语言 处理
zhejianshitagegehaibuzhidaone
这 件 事 她 哥哥 海部 知道 呢
Chinese Hanzi-to-Pinyin Conversion
$ ./build/jsc-decode -d data/chinese-reverse/
汉字拼音转换
hanzi pinyin zhuanhuan
For English input, a model is provided at data/english/ directory.
For English/Japanese/Chinese mixed input, a model is provided at data/mixed/ directory. The language is detected automatically.
Top directory contains these files and directories:
N-gram file should be SRILM format.
Target string and source string should be coupled with character '/'; e.g. "私/わたし"
Language
F-score
Size
Japanese
0.937
10MB
Chinese
0.895
9MB
English
not ready
9MB
Mixed
not ready
27MB
Please refer to this paper if needed.
English word segmentation / automatic capitalization
$ ./build/jsc-decode -d data/english/
alicewasbeginningtogetverytiredofsittingbyhersisteronthebank
Alice was beginning to get very tired of sitting by her sister on the bank
istandheretodayhumbledbythetaskbeforeusgratefulforthetrustyouhavebestowedmindfulofthesacrificesbornebyourancestors
I Stand here today humbled by the task before us grateful for the trust you have bestowed mindful of the
sacrifices borne by our ancestors
Mixed Input
$ ./build/jsc-decode -d data/mixed/
thisisapencil
This is a pencil
kyouhayoitenkidesune
今日 は 良い 天気 です ね
woshizhongguoren
我 是 中国 人
thisistotemohaochi!
This is とても 好吃 !
Data Structure
Directories
README.md this file
build/ built by waf automatically
data/ model files
src/ source and header files for C++
tools/ command tools by C++
waf waf build script
wscript waf settings
File format
http://www.speech.sri.com/projects/srilm/
Reference
Accuracy
Paper
Yoh Okuno and Shinsuke Mori, An Ensemble Model of Word-based and Character-based Models for Japanese and Chinese
Input Method, Workshop on Advances in Text Input Methods, 2012.
|
JSC: Joint Source Channel Model and Decoder
===
**JSC** is an implementation of a joint source channel (joint n-gram) model with a monotonic decoder.
A Joint Source-Channel Model for Machine Transliteration, Li Haizhou, Zhang Min, Su Jian.
http://acl.ldc.upenn.edu/acl2004/main/pdf/121_pdf_2-col.pdf
It can be used for machine transliteration, Japanese kana-kanji conversion, Chinese pinyin input, English word segmentation or pronunciation inference.
Requirement
---
JSC requires Unix, gcc, python, and marisa-trie. If you want to use the RPC server, you also need libevent.
marisa-trie 0.2.0 or later
http://code.google.com/p/marisa-trie/
libevent 2.0 or later
http://libevent.org/
Install
---
To install JSC, type these commands into your console.
$ ./waf configure [--prefix=INSTALL_DIRECTORY]
$ ./waf build
$ sudo ./waf install
Usage
---
### jsc-decode
The **jsc-decode** command converts a source string into a target string via the joint source channel model.
You can provide queries through standard input line by line.
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-l: turn off sentence-beginning/ending label
### jsc-build
The **jsc-build** command builds model files in binary format from an n-gram file in text format.
options:
-d directory: specify data directory or prefix (default: ./)
-m model: specify model file name (default: ngram)
-t trie_num: specify trie number in marisa-trie (default: 3)
-r: build reverse model
### jsc-server
The **jsc-server** command provides an RPC server via a simple TCP protocol.
You can provide queries through the telnet command line by line (a minimal client sketch follows the options below).
options:
-d directory: specify data directory or prefix (default: ./)
-f format: specify format (segment [default], plain, debug)
-t table: specify table [romaji] mode (both [default], on, off)
-p port: specify port number (default: 40714)
-l: turn off sentence-beginning/ending label
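
Since the README only exercises the server through telnet, the following Python snippet is a hypothetical client sketch: it assumes a line-oriented exchange (one UTF-8 query per line, one converted line back) on the default port, which is an assumption rather than a documented wire format.

```python
import socket

# Hypothetical client for a running `jsc-server -d data/japanese/`.
# Assumes one UTF-8 query per line and one response line back; the exact
# framing is not documented here and may differ.
HOST, PORT = "localhost", 40714  # 40714 is the documented default port

with socket.create_connection((HOST, PORT)) as conn:
    conn.sendall("わたしのなまえはなかのです。\n".encode("utf-8"))
    response = conn.recv(4096).decode("utf-8")
    print(response.strip())
```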
Sample Applications
---
### Japanese Kana Kanji Conversion
For Japanese Kana Kanji conversion, a model is provided at data/japanese directory. By default, both romaji and hiragana input are allowed.
$ ./build/jsc-decode -d data/japanese/
わたしのなまえはなかのです。
わたし の 名前 は 中野 です 。
arayurugenjitsuwosubetejibunnnohouhenejimagetanoda
あらゆる 現実 を 全て 自分 の ほう へ ネジ 曲げ た の だ
### Japanese Pronunciation Inference
For Japanese pronunciation inference, a model is provided at data/japanese-reverse directory.
$ ./build/jsc-decode -d data/japanese-reverse/
魔理沙は大変なものを盗んでいきました
ま りさ は たいへん な もの を ぬす ん で い き ま し た
### Chinese Pinyin Input
For Chinese Pinyin input, a model is provided at data/chinese/ directory.
$ ./build/jsc-decode -d data/chinese/
woaiziranyuyanchuli
我 爱 自然 语言 处理
zhejianshitagegehaibuzhidaone
这 件 事 她 哥哥 海部 知道 呢
### Chinese Hanzi-to-Pinyin Conversion
For Chinese Hanzi-to-Pinyin Conversion, a model is provided at data/chinese-reverse/ directory.
$ ./build/jsc-decode -d data/chinese-reverse/
汉字拼音转换
hanzi pinyin zhuanhuan
### English word segmentation / automatic capitalization
For English input, a model is provided at data/english/ directory.
$ ./build/jsc-decode -d data/english/
alicewasbeginningtogetverytiredofsittingbyhersisteronthebank
Alice was beginning to get very tired of sitting by her sister on the bank
istandheretodayhumbledbythetaskbeforeusgratefulforthetrustyouhavebestowedmindfulofthesacrificesbornebyourancestors
I Stand here today humbled by the task before us grateful for the trust you have bestowed mindful of the sacrifices borne by our ancestors
### Mixed Input
For English/Japanese/Chinese mixed input, a model is provided at data/mixed/ directory. The language is detected automatically.
$ ./build/jsc-decode -d data/mixed/
thisisapencil
This is a pencil
kyouhayoitenkidesune
今日 は 良い 天気 です ね
woshizhongguoren
我 是 中国 人
thisistotemohaochi!
This is とても 好吃 !
Data Structure
---
### Directories
Top directory contains these files and directories:
README.md this file
build/ built by waf automatically
data/ model files
src/ source and header files for C++
tools/ command tools by C++
waf waf build script
wscript waf settings
### File format
N-gram file should be SRILM format.
http://www.speech.sri.com/projects/srilm/
Target string and source string should be coupled with character '/'; e.g. "私/わたし"
Reference
---
### Accuracy
<table>
<tr><th>Language</th><th>F-score</th><th>Size</th></tr>
<tr><td>Japanese</td><td>0.937</td><td>10MB</td></tr>
<tr><td>Chinese</td><td>0.895</td><td>9MB</td></tr>
<tr><td>English</td><td>not ready</td><td>9MB</td></tr>
<tr><td>Mixed</td><td>not ready</td><td>27MB</td></tr>
</table>
### Paper
Please refer to this paper if needed.
Yoh Okuno and Shinsuke Mori, An Ensemble Model of Word-based and Character-based Models for Japanese and Chinese Input Method, Workshop on Advances in Text Input Methods, 2012.
http://yoh.okuno.name/pdf/wtim2012.pdf
|
[
"Language Models",
"Multilinguality",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/lovell/hepburn
|
2013-06-28T10:06:51Z
|
Node.js module for converting Japanese Hiragana and Katakana script to, and from, Romaji using Hepburn romanisation
|
lovell / hepburn
.github…
lib
tests
.gitignore
LICEN…
READ…
packag…
Node.js module for converting Japanese Hiragana and Katakana
script to, and from, Romaji using Hepburn romanisation.
Based partly on Takaaki Komura's kana2hepburn.
About
Node.js module for converting
Japanese Hiragana and Katakana
script to, and from, Romaji using
Hepburn romanisation
# nodejs # javascript # katakana # hiragana
# romaji # hepburn # hepburn-romanisation
Readme
Apache-2.0 license
Activity
130 stars
4 watching
23 forks
Report repository
Releases
16 tags
Packages
No packages published
Contributors
10
Languages
JavaScript 100.0%
Hepburn
Install
npm install hepburn
Usage
var hepburn = require("hepburn");
fromKana(string)
Converts a string containing Kana, either Hiragana or Katakana,
to Romaji.
In this example romaji1 will have the value HIRAGANA ,
romaji2 will have the value KATAKANA .
Converts a string containing Romaji to Hiragana.
In this example hiragana will have the value ひらがな.
Converts a string containing Romaji to Katakana.
In this example katakana will have the value カタカナ and
tokyo will have the value トーキョー.
Cleans up a romaji string, changing old romaji forms into the
more-modern Hepburn form (for further processing). Generally
matches the style used by Wapro romaji. A larger guide to
modern romaji conventions was used in building this method.
What this methods fixes:
Incorrect usage of the letter M. For example "Shumman"
should be written as "Shunman".
Changing usage of NN into N', for example "Shunnei"
becomes "Shun'ei".
Converting the usage of OU and OH (to indicate a long
vowel) into OO.
var romaji1 = hepburn.fromKana("ひらがな");
var romaji2 = hepburn.fromKana("カタカナ");
toHiragana(string)
var hiragana = hepburn.toHiragana("HIRAGANA");
toKatakana(string)
var katakana = hepburn.toKatakana("KATAKANA");
var tokyo = hepburn.toKatakana("TŌKYŌ");
cleanRomaji(string)
var cleaned = hepburn.cleanRomaji("SYUNNEI");
// cleaned === "SHUN'EI"
Correct old usages Nihon-shiki romanization into Hepburn
form. A full list of the conversions can be found in the
hepburn.js file. For example "Eisyosai" becomes
"Eishosai" and "Yoshihuji" becomes "Yoshifuji".
Splits a string containing Katakana or Hiragana into a syllables
array.
In this example hiragana will have the value ["ひ", "ら",
"が", "な"] and tokyo will have the value ["トー", "キョ
ー"] .
Splits a string containing Romaji into a syllables array.
In this example tokyo will have the value ["TŌ", "KYŌ"] and
pakkingu will have the value ["PAK", "KI", "N", "GU"] .
Returns true if string contains Hiragana.
Returns true if string contains Katakana.
Returns true if string contains any Kana.
Returns true if string contains any Kanji.
Run the unit tests with:
splitKana(string)
var hiragana = hepburn.splitKana("ひらがな");
var tokyo = hepburn.splitKana("トーキョー");
splitRomaji(string)
var tokyo = hepburn.splitRomaji("TŌKYŌ");
var pakkingu = hepburn.splitRomaji("PAKKINGU");
containsHiragana(string)
containsKatakana(string)
containsKana(string)
containsKanji(string)
Testing Build Status
Copyright 2013, 2014, 2015, 2018, 2020 Lovell Fuller and
contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under
npm test
Licence
|
# Hepburn
Node.js module for converting Japanese Hiragana and Katakana script to, and from, Romaji using [Hepburn romanisation](http://en.wikipedia.org/wiki/Hepburn_romanization).
Based partly on Takaaki Komura's [kana2hepburn](https://github.com/emon/kana2hepburn).
## Install
npm install hepburn
## Usage
```javascript
var hepburn = require("hepburn");
```
### fromKana(string)
```javascript
var romaji1 = hepburn.fromKana("ひらがな");
var romaji2 = hepburn.fromKana("カタカナ");
```
Converts a string containing Kana, either Hiragana or Katakana, to Romaji.
In this example `romaji1` will have the value `HIRAGANA`, `romaji2` will have the value `KATAKANA`.
### toHiragana(string)
```javascript
var hiragana = hepburn.toHiragana("HIRAGANA");
```
Converts a string containing Romaji to Hiragana.
In this example `hiragana` will have the value `ひらがな`.
### toKatakana(string)
```javascript
var katakana = hepburn.toKatakana("KATAKANA");
var tokyo = hepburn.toKatakana("TŌKYŌ");
```
Converts a string containing Romaji to Katakana.
In this example `katakana` will have the value `カタカナ` and `tokyo` will have the value `トーキョー`.
### cleanRomaji(string)
```javascript
var cleaned = hepburn.cleanRomaji("SYUNNEI");
// cleaned === "SHUN'EI"
```
Cleans up a romaji string, changing old romaji forms into the more-modern
Hepburn form (for further processing). Generally matches the style used by
[Wapro romaji](https://en.wikipedia.org/wiki/W%C4%81puro_r%C5%8Dmaji).
A larger [guide to modern romaji conventions](https://www.nayuki.io/page/variations-on-japanese-romanization)
was used in building this method.
What this method fixes:
* Incorrect usage of the letter M. For example "Shumman" should be written as "Shunman".
* Changing usage of NN into N', for example "Shunnei" becomes "Shun'ei".
* Converting the usage of OU and OH (to indicate a long vowel) into OO.
* Correcting old [Nihon-shiki romanization](https://en.wikipedia.org/wiki/Nihon-shiki_romanization) usages into Hepburn form. A full list of the conversions can be found in the `hepburn.js` file. For example "Eisyosai" becomes "Eishosai" and "Yoshihuji" becomes "Yoshifuji".
### splitKana(string)
```javascript
var hiragana = hepburn.splitKana("ひらがな");
var tokyo = hepburn.splitKana("トーキョー");
```
Splits a string containing Katakana or Hiragana into a syllables array.
In this example `hiragana` will have the value `["ひ", "ら", "が", "な"]` and `tokyo` will have the value `["トー", "キョー"]`.
### splitRomaji(string)
```javascript
var tokyo = hepburn.splitRomaji("TŌKYŌ");
var pakkingu = hepburn.splitRomaji("PAKKINGU");
```
Splits a string containing Romaji into a syllables array.
In this example `tokyo` will have the value `["TŌ", "KYŌ"]` and `pakkingu` will have the value `["PAK", "KI", "N", "GU"]`.
### containsHiragana(string)
Returns `true` if `string` contains Hiragana.
### containsKatakana(string)
Returns `true` if `string` contains Katakana.
### containsKana(string)
Returns `true` if `string` contains any Kana.
### containsKanji(string)
Returns `true` if `string` contains any Kanji.
## Testing [](https://travis-ci.org/lovell/hepburn)
Run the unit tests with:
npm test
## Licence
Copyright 2013, 2014, 2015, 2018, 2020 Lovell Fuller and contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0.html)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
[
"Phonology",
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/jeresig/node-romaji-name
|
2013-08-24T17:50:11Z
|
Normalize and fix common issues with Romaji-based Japanese names.
|
jeresig / node-romaji-name
.gitignore
LICEN…
READ…
packag…
packag…
romaji-…
setting…
test-pe…
test.js
This is a utility primarily designed for consuming, parsing, and correcting Japanese
names written in rōmaji using proper Hepburn romanization form.
Beyond fixing common problems with Japanese names written with rōmaji, it's also
able to do a number of amazing things:
1. It's able to figure out which part of the name is the surname and which is the
given name and correct the order, if need be (using the enamdict module).
2. It's able to fix names that are missing important punctuation or stress marks
(such as missing long vowel marks, like ō, or ' for splitting confusing n-vowel
usage).
3. It's able to detect non-Japanese names and leave them intact for future
processing.
About
Normalize and fix common issues
with Romaji-based Japanese
names.
www.npmjs.org/package/romaji-…
Readme
MIT license
Activity
41 stars
2 watching
5 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
JavaScript 100.0%
romaji-name
4. It's able to provide the kana form of the Japanese name (using Hiragana and
the hepburn module).
5. It's able to correctly split Japanese names, written with Kanji, into their proper
given and surnames.
6. It can detect and properly handle the "generation" portion of the name, both in
English and in Japanese (e.g. III, IV, etc.).
This utility was created to help consume all of the (extremely-poorly-written)
Japanese names found when collecting data for the Ukiyo-e Database and Search
Engine.
All code is written by John Resig and is released under an MIT license.
If you like this module this you may also be interested in two other modules that
this module depends upon: enamdict and hepburn.
Which will log out objects that look something like this:
Example
var romajiName = require("romaji-name");
// Wait for the module to completely load
// (loads the ENAMDICT dictionary)
romajiName.init(function() {
console.log(romajiName.parseName("Kenichi Nakamura"));
console.log(romajiName.parseName("Gakuryo Nakamura"));
console.log(romajiName.parseName("Charles Bartlett"));
});
// Note the correction of the order of the given/surname
// Also note the correct kana generated and the injection
// of the missing '
{
original: 'Kenichi Nakamura',
locale: 'ja',
given: 'Ken\'ichi',
given_kana: 'けんいち',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Ken\'ichi',
ascii: 'Nakamura Ken\'ichi',
plain: 'Nakamura Ken\'ichi',
kana: 'なかむらけんいち'
}
// Note the correction of the order of the given/surname
// Also note the correction of the missing ō
{
original: 'Gakuryo Nakamura',
locale: 'ja',
given: 'Gakuryō',
This is available as a node module on npm. You can find it here:
https://npmjs.org/package/romaji-name It can be installed by running the following:
This library provides a large number of utility methods for working with names
(especially Japanese names). That being said you'll probably only ever make use
of just the few main methods:
Loads the dependent modules (namely, loads the enamdict name database). If,
for some reason, you don't need to do any surname/given name correction, or
correction of stress marks, then you can skip this step (this would likely be a very
abnormal usage of this library).
Parses a single string name and returns an object representing that name.
Optionally you can specify some settings to modify how the name is parsed, see
below for a list of all the settings.
The returned object will have some, or all, of the following properties:
original : The original string that was passed in to parseName .
given_kana: 'がくりょう',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Gakuryō',
ascii: 'Nakamura Gakuryoo',
plain: 'Nakamura Gakuryo',
kana: 'なかむらがくりょう'
}
// Note that it detects that this is likely not a Japanese name
// (and leaves the locale empty, accordingly)
{
original: 'Charles Bartlett',
locale: '',
given: 'Charles',
surname: 'Bartlett',
name: 'Charles Bartlett',
ascii: 'Charles Bartlett',
plain: 'Charles Bartlett'
}
Installation
npm install romaji-name
Documentation
init(Function)
parseName(String [, Object])
settings : An object holding the settings that were passed in to the
parseName method.
locale : A guess at the locale of the name. Only two values exist: "ja" and
"" . Note that just because "ja" was returned it does not guarantee that the
person is actually Japanese, just that the name looks to be Japanese-like (for
example: Some Chinese names also return "ja" ).
given : A string of the Romaji form of the given name. (Will only exist if a
Romaji form was originally provided.)
given_kana : A string of the Kana form of the given name. (Will only exist if a
Romaji form was originally provided and if the locale is "ja" .)
given_kanji : A string of the Kanji form of the given name. (Will only exist if a
Kanji form was originally provided.)
middle :
surname : A string of the Romaji form of the surname. (Will only exist if a
Romaji form was originally provided.)
surname_kana : A string of the Kana form of the surname. (Will only exist if a
Romaji form was originally provided and if the locale is "ja" .)
surname_kanji : A string of the Kanji form of the surname. (Will only exist if a
Kanji form was originally provided.)
generation : A number representing the generation of the name. For example
"John Smith II" would have a generation of 2 .
name : The full name, in properly-stressed romaji, including the generation.
For example: "Nakamura Gakuryō II" .
ascii : The full name, in ascii text, including the generation. This is a proper
ascii representation of the name (with long vowels converted from "ō" into
"oo", for example). Example: "Nakamura Gakuryoo II" .
plain : The full name, in plain text, including the generation. This is the same
as the name property but with all stress formatting stripped from it. This could
be useful to use in the generation of a URL slug, or some such. It should never
be displayed to an end-user as it will almost always be incorrect. Example:
"Nakamura Gakuryo II" .
kana : The full name, in kana, without the generation. For example: "なかむら
がくりょう".
kanji : The full name, in kanji, including the generation. For example: "戯画堂
芦幸 2世" .
unknown : If the name is a representation of an unknown individual (e.g. it's
the string "Unknown", "Not known", or many of the other variations) then this
property will exist and be true .
attributed : If the name includes a prefix like "Attributed to" then this will be
true .
after : If the name includes some sort of "After" or "In the style of" or similar
prefix then this will be true .
school : If the name includes a prefix like "School of", "Pupil of", or similar
then this will be true .
Settings:
The following are optional settings that change how the name parsing functions.
flipNonJa : Names that don't have a "ja" locale should be flipped ("Smith
John" becomes "John Smith").
stripParens : Removes anything that's wrapped in parentheses. Normally
this is left intact and any extra information is parsed from it.
givenFirst : Assumes that the first name is always the given name.
Same as the normal parseName method but accepts an object that's in the same
form as the object returned from parseName . This is useful as you can take
parseName(Object)
|
romaji-name
================
This is a utility primarily designed for consuming, parsing, and correcting Japanese names written in [rōmaji](https://en.wikipedia.org/wiki/Romanization_of_Japanese) using proper [Hepburn romanization](https://en.wikipedia.org/wiki/Hepburn_romanization) form.
Beyond fixing common problems with Japanese names written with rōmaji, it's also able to do a number of amazing things:
1. It's able to figure out which part of the name is the surname and which is the given name and correct the order, if need be (using the [enamdict](https://npmjs.org/package/enamdict) module).
2. It's able to fix names that are missing important punctuation or stress marks (such as missing long vowel marks, like **ō**, or `'` for splitting confusing n-vowel usage).
3. It's able to detect non-Japanese names and leave them intact for future processing.
4. It's able to provide the kana form of the Japanese name (using [Hiragana](https://en.wikipedia.org/wiki/Hiragana) and the [hepburn](https://npmjs.org/package/hepburn) module).
5. It's able to correctly split Japanese names, written with Kanji, into their proper given and surnames.
6. It can detect and properly handle the "generation" portion of the name, both in English and in Japanese (e.g. III, IV, etc.).
This utility was created to help consume all of the (extremely-poorly-written) Japanese names found when collecting data for the [Ukiyo-e Database and Search Engine](http://ukiyo-e.org/).
All code is written by [John Resig](http://ejohn.org/) and is released under an MIT license.
If you like this module this you may also be interested in two other modules that this module depends upon: [enamdict](https://npmjs.org/package/enamdict) and [hepburn](https://npmjs.org/package/hepburn).
Example
-------
```javascript
var romajiName = require("romaji-name");
// Wait for the module to completely load
// (loads the ENAMDICT dictionary)
romajiName.init(function() {
console.log(romajiName.parseName("Kenichi Nakamura"));
console.log(romajiName.parseName("Gakuryo Nakamura"));
console.log(romajiName.parseName("Charles Bartlett"));
});
```
Which will log out objects that look something like this:
```javascript
// Note the correction of the order of the given/surname
// Also note the correct kana generated and the injection
// of the missing '
{
original: 'Kenichi Nakamura',
locale: 'ja',
given: 'Ken\'ichi',
given_kana: 'けんいち',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Ken\'ichi',
ascii: 'Nakamura Ken\'ichi',
plain: 'Nakamura Ken\'ichi',
kana: 'なかむらけんいち'
}
// Note the correction of the order of the given/surname
// Also note the correction of the missing ō
{
original: 'Gakuryo Nakamura',
locale: 'ja',
given: 'Gakuryō',
given_kana: 'がくりょう',
surname: 'Nakamura',
surname_kana: 'なかむら',
name: 'Nakamura Gakuryō',
ascii: 'Nakamura Gakuryoo',
plain: 'Nakamura Gakuryo',
kana: 'なかむらがくりょう'
}
// Note that it detects that this is likely not a Japanese name
// (and leaves the locale empty, accordingly)
{
original: 'Charles Bartlett',
locale: '',
given: 'Charles',
surname: 'Bartlett',
name: 'Charles Bartlett',
ascii: 'Charles Bartlett',
plain: 'Charles Bartlett'
}
```
Installation
------------
This is available as a node module on npm. You can find it here: https://npmjs.org/package/romaji-name It can be installed by running the following:
npm install romaji-name
Documentation
-------------
This library provides a large number of utility methods for working with names (especially Japanese names). That being said you'll probably only ever make use of just the few main methods:
### `init(Function)`
Loads the dependent modules (namely, loads the `enamdict` name database). If, for some reason, you don't need to do any surname/given name correction, or correction of stress marks, then you can skip this step (this would likely be a very abnormal usage of this library).
### `parseName(String [, Object])`
Parses a single string name and returns an object representing that name. Optionally you can specify some settings to modify how the name is parsed, see below for a list of all the settings.
The returned object will have some, or all, of the following properties:
* `original`: The original string that was passed in to `parseName`.
* `settings`: An object holding the settings that were passed in to the `parseName` method.
* `locale`: A guess at the locale of the name. Only two values exist: `"ja"` and `""`. Note that just because `"ja"` was returned it does not guarantee that the person is actually Japanese, just that the name looks to be Japanese-like (for example: Some Chinese names also return `"ja"`).
* `given`: A string of the Romaji form of the given name. (Will only exist if a Romaji form was originally provided.)
* `given_kana`: A string of the Kana form of the given name. (Will only exist if a Romaji form was originally provided and if the locale is `"ja"`.)
* `given_kanji`: A string of the Kanji form of the given name. (Will only exist if a Kanji form was originally provided.)
* `middle`:
* `surname`: A string of the Romaji form of the surname. (Will only exist if a Romaji form was originally provided.)
* `surname_kana`: A string of the Kana form of the surname. (Will only exist if a Romaji form was originally provided and if the locale is `"ja"`.)
* `surname_kanji`: A string of the Kanji form of the surname. (Will only exist if a Kanji form was originally provided.)
* `generation`: A number representing the generation of the name. For example "John Smith II" would have a generation of `2`.
* `name`: The full name, in properly-stressed romaji, including the generation. For example: `"Nakamura Gakuryō II"`.
* `ascii`: The full name, in ascii text, including the generation. This is a proper ascii representation of the name (with long vowels converted from "ō" into "oo", for example). Example: `"Nakamura Gakuryoo II"`.
* `plain`: The full name, in plain text, including the generation. This is the same as the `name` property but with all stress formatting stripped from it. This could be useful to use in the generation of a URL slug, or some such. It should never be displayed to an end-user as it will almost always be incorrect. Example: `"Nakamura Gakuryo II"`.
* `kana`: The full name, in kana, without the generation. For example: "なかむらがくりょう".
* `kanji`: The full name, in kanji, including the generation. For example: `"戯画堂芦幸 2世"`.
* `unknown`: If the name is a representation of an unknown individual (e.g. it's the string "Unknown", "Not known", or many of the other variations) then this property will exist and be `true`.
* `attributed`: If the name includes a prefix like "Attributed to" then this will be `true`.
* `after`: If the name includes some sort of "After" or "In the style of" or similar prefix then this will be `true`.
* `school`: If the name includes a prefix like "School of", "Pupil of", or similar then this will be `true`.
**Settings:**
The following are optional settings that change how the name parsing functions.
* `flipNonJa`: Names that don't have a "ja" locale should be flipped ("Smith John" becomes "John Smith").
* `stripParens`: Removes anything that's wrapped in parentheses. Normally this is left intact and any extra information is parsed from it.
* `givenFirst`: Assumes that the first name is always the given name.
### `parseName(Object)`
Same as the normal `parseName` method but accepts an object that's in the same form as the object returned from `parseName`. This is useful as you can take existing `romaji-name`-generated name objects and re-parse them again (to easily upgrade them when new changes are made to the `romaji-name` module).
|
[
"Syntactic Text Processing",
"Text Error Correction",
"Text Normalization"
] |
[] |
true |
https://github.com/WaniKani/WanaKana
|
2013-08-27T19:57:41Z
|
Javascript library for detecting and transliterating Hiragana <--> Katakana <--> Romaji
|
WaniKani / WanaKana
.github
cypress
gh-pages
scripts
src
test
.browserslis…
.editorconfig
.eslintrc
.gitignore
.prettierrc
.travis.yml
CHANGEL…
CONTRIBU…
LICENSE
README.md
VERSION
babel.confi…
cypress.json
jsdoc.json
package.json
rollup.confi…
tsconfig.json
yarn.lock
About
Javascript library for detecting and
transforming between Hiragana,
Katakana, and Romaji
wanakana.com
Readme
MIT license
Activity
Custom properties
765 stars
14 watching
76 forks
Report repository
Releases 24
5.3.1
Latest
on Nov 20, 2023
+ 23 releases
Packages
No packages published
Used by 1k
+ 1,002
Contributors
17
+ 3 contributors
Languages
JavaScript 78.7%
HTML 8.7%
CSS 5.2%
SCSS 4.3%
|
<div align="center">
<!-- Npm Version -->
<a href="https://www.npmjs.com/package/wanakana">
<img src="https://img.shields.io/npm/v/wanakana.svg" alt="NPM package" />
</a>
<!-- Build Status -->
<a href="https://travis-ci.org/WaniKani/WanaKana">
<img src="https://img.shields.io/travis/WaniKani/WanaKana.svg" alt="Build Status" />
</a>
<!-- Test Coverage -->
<a href="https://coveralls.io/github/WaniKani/WanaKana">
<img src="https://img.shields.io/coveralls/WaniKani/WanaKana.svg" alt="Test Coverage" />
</a>
<a href="https://dashboard.cypress.io/#/projects/tmdhov/runs">
<img src="https://img.shields.io/badge/cypress-dashboard-brightgreen.svg" alt="Cypress Dashboard" />
</a>
</div>
<div align="center">
<h1>ワナカナ <--> WanaKana <--> わなかな</h1>
<h4>Javascript utility library for detecting and transliterating Hiragana, Katakana, and Romaji</h4>
</div>
## Demo
Visit the [website](http://www.wanakana.com) to see WanaKana in action.
## Usage
### In the browser without a build step, use the minified (UMD) bundle (with browser polyfills)
[https://unpkg.com/wanakana](https://unpkg.com/wanakana)
```html
<head>
<meta charset="UTF-8">
<script src="https://unpkg.com/wanakana"></script>
</head>
<body>
<input type="text" id="wanakana-input" />
<script>
var textInput = document.getElementById('wanakana-input');
wanakana.bind(textInput, /* options */); // uses IMEMode with toKana() as default
// to remove event listeners: wanakana.unbind(textInput);
</script>
</body>
```
### ES Modules or Node
#### Install
```shell
npm install wanakana
```
#### ES Modules
```javascript
import * as wanakana from 'wanakana';
// or
import { toKana, isRomaji } from 'wanakana';
```
#### Node (>=12 supported)
```javascript
const wanakana = require('wanakana');
```
## Documentation
[Extended API reference](http://www.wanakana.com/docs/global.html)
## Quick Reference
```javascript
/*** DOM HELPERS ***/
// Automatically converts text using an eventListener on input
// Sets option: { IMEMode: true } with toKana() as converter by default
wanakana.bind(domElement [, options]);
// Removes event listener
wanakana.unbind(domElement);
/*** TEXT CHECKING UTILITIES ***/
wanakana.isJapanese('泣き虫。!〜2¥zenkaku')
// => true
wanakana.isKana('あーア')
// => true
wanakana.isHiragana('すげー')
// => true
wanakana.isKatakana('ゲーム')
// => true
wanakana.isKanji('切腹')
// => true
wanakana.isKanji('勢い')
// => false
wanakana.isRomaji('Tōkyō and Ōsaka')
// => true
wanakana.toKana('ONAJI buttsuuji')
// => 'オナジ ぶっつうじ'
wanakana.toKana('座禅‘zazen’スタイル')
// => '座禅「ざぜん」スタイル'
wanakana.toKana('batsuge-mu')
// => 'ばつげーむ'
wanakana.toKana('wanakana', { customKanaMapping: { na: 'に', ka: 'bana' } });
// => 'わにbanaに'
wanakana.toHiragana('toukyou, オオサカ')
// => 'とうきょう、 おおさか'
wanakana.toHiragana('only カナ', { passRomaji: true })
// => 'only かな'
wanakana.toHiragana('wi', { useObsoleteKana: true })
// => 'ゐ'
wanakana.toKatakana('toukyou, おおさか')
// => 'トウキョウ、 オオサカ'
wanakana.toKatakana('only かな', { passRomaji: true })
// => 'only カナ'
wanakana.toKatakana('wi', { useObsoleteKana: true })
// => 'ヰ'
wanakana.toRomaji('ひらがな カタカナ')
// => 'hiragana katakana'
wanakana.toRomaji('ひらがな カタカナ', { upcaseKatakana: true })
// => 'hiragana KATAKANA'
wanakana.toRomaji('つじぎり', { customRomajiMapping: { じ: 'zi', つ: 'tu', り: 'li' } });
// => 'tuzigili'
/*** EXTRA UTILITIES ***/
wanakana.stripOkurigana('お祝い')
// => 'お祝'
wanakana.stripOkurigana('踏み込む')
// => '踏み込'
wanakana.stripOkurigana('お腹', { leading: true });
// => '腹'
wanakana.stripOkurigana('ふみこむ', { matchKanji: '踏み込む' });
// => 'ふみこ'
wanakana.stripOkurigana('おみまい', { matchKanji: 'お祝い', leading: true });
// => 'みまい'
wanakana.tokenize('ふふフフ')
// => ['ふふ', 'フフ']
wanakana.tokenize('hello 田中さん')
// => ['hello', ' ', '田中', 'さん']
wanakana.tokenize('I said 私はすごく悲しい', { compact: true })
// => [ 'I said ', '私はすごく悲しい']
```
## Important
Only the browser build via unpkg or the root `wanakana.min.js` includes polyfills for older browsers.
## Contributing
Please see [CONTRIBUTING.md](CONTRIBUTING.md)
## Contributors
* [Mims H. Wright](https://github.com/mimshwright) – Author
* [Duncan Bay](https://github.com/DJTB) – Author
* [Geggles](https://github.com/geggles) – Contributor
* [James McNamee](https://github.com/dotfold) – Contributor
## Credits
Project sponsored by [Tofugu](http://www.tofugu.com) & [WaniKani](http://www.wanikani.com)
## Ports
The following ports have been created by the community:
* Python ([Starwort/wanakana-py](https://github.com/Starwort/wanakana-py)) on PyPI as `wanakana-python`
* Java ([MasterKale/WanaKanaJava](https://github.com/MasterKale/WanaKanaJava))
* Rust ([PSeitz/wana_kana_rust](https://github.com/PSeitz/wana_kana_rust))
* Swift ([profburke/WanaKanaSwift](https://github.com/profburke/WanaKanaSwift))
* Kotlin ([esnaultdev/wanakana-kt](https://github.com/esnaultdev/wanakana-kt))
* C# ([kmoroz/WanaKanaShaapu](https://github.com/kmoroz/WanaKanaShaapu))
* Go ([deelawn/wanakana](https://github.com/deelawn/wanakana))
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/gojp/nihongo
|
2013-09-02T15:17:52Z
|
Japanese Dictionary
|
|
nihongo.io
=========
[](https://goreportcard.com/report/github.com/gojp/nihongo)
Open source Japanese Dictionary written in Go: [https://nihongo.io](https://nihongo.io)
### How to run:
1. `git clone https://github.com/gojp/nihongo.git`
2. Run the app: `go run main.go`
|
[] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/studio-ousia/mojimoji
|
2013-11-02T16:23:06Z
|
A fast converter between Japanese hankaku and zenkaku characters
|
|
mojimoji
========
.. image:: https://github.com/studio-ousia/mojimoji/actions/workflows/test.yml/badge.svg
:target: https://github.com/studio-ousia/mojimoji/actions/workflows/test.yml
.. image:: https://img.shields.io/pypi/v/mojimoji.svg
:target: https://pypi.org/project/mojimoji/
.. image:: https://static.pepy.tech/personalized-badge/mojimoji?period=total&units=international_system&left_color=grey&right_color=orange&left_text=pip%20downloads
:target: https://pypi.org/project/mojimoji/
A Cython-based fast converter between Japanese hankaku and zenkaku characters.
Installation
------------
.. code-block:: bash
$ pip install mojimoji
Examples
--------
Zenkaku to Hankaku
^^^^^^^^^^^^^^^^^^
.. code-block:: python
>>> import mojimoji
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２'))
ｱｲｳabc012
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', kana=False))
アイウabc012
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', digit=False))
ｱｲｳabc０１２
>>> print(mojimoji.zen_to_han('アイウａｂｃ０１２', ascii=False))
ｱｲｳａｂｃ012
Hankaku to Zenkaku
^^^^^^^^^^^^^^^^^^
.. code-block:: python
>>> import mojimoji
>>> print(mojimoji.han_to_zen('ｱｲｳabc012'))
アイウａｂｃ０１２
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', kana=False))
ｱｲｳａｂｃ０１２
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', digit=False))
アイウａｂｃ012
>>> print(mojimoji.han_to_zen('ｱｲｳabc012', ascii=False))
アイウabc０１２
Benchmarks
----------
Library versions
^^^^^^^^^^^^^^^^
- mojimoji: 0.0.1
- `zenhan <https://pypi.python.org/pypi/zenhan>`_: 0.4
- `unicodedata <http://docs.python.org/2/library/unicodedata.html>`_: Bundled with Python 2.7.3
Results
^^^^^^^
.. code-block:: python
In [19]: s = 'ABCDEFG012345' * 10
In [20]: %time for n in range(1000000): mojimoji.zen_to_han(s)
CPU times: user 2.86 s, sys: 0.10 s, total: 2.97 s
Wall time: 2.88 s
In [21]: %time for n in range(1000000): unicodedata.normalize('NFKC', s)
CPU times: user 5.43 s, sys: 0.12 s, total: 5.55 s
Wall time: 5.44 s
In [22]: %time for n in range(1000000): zenhan.z2h(s)
CPU times: user 69.18 s, sys: 0.11 s, total: 69.29 s
Wall time: 69.48 s
Links
-----
- `mojimoji-rs <https://github.com/europeanplaice/mojimoji-rs>`_: The Rust implementation of mojimoji
- `gomojimoji <https://github.com/rusq/gomojimoji>`_: The Go implementation of mojimoji
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/cihai/cihai
|
2013-12-03T17:42:52Z
|
Python library for CJK (Chinese, Japanese, and Korean) language dictionary
|
|
# cihai · [](https://pypi.org/project/cihai/) [](https://github.com/cihai/cihai/blob/master/LICENSE) [](https://codecov.io/gh/cihai/cihai)
Python library for [CJK](https://cihai.git-pull.com/glossary.html#term-cjk) (chinese, japanese,
korean) data.
This project is under active development. Follow our progress and check back for updates!
## Quickstart
### API / Library (this repository)
```console
$ pip install --user cihai
```
```python
from cihai.core import Cihai
c = Cihai()
if not c.unihan.is_bootstrapped: # download and install Unihan to db
c.unihan.bootstrap()
query = c.unihan.lookup_char('好')
glyph = query.first()
print("lookup for 好: %s" % glyph.kDefinition)
# lookup for 好: good, excellent, fine; well
query = c.unihan.reverse_char('good')
print('matches for "good": %s ' % ', '.join([glph.char for glph in query]))
# matches for "good": 㑘, 㑤, 㓛, 㘬, 㙉, 㚃, 㚒, 㚥, 㛦, 㜴, 㜺, 㝖, 㤛, 㦝, ...
```
See [API](https://cihai.git-pull.com/api.html) documentation and
[/examples](https://github.com/cihai/cihai/tree/master/examples).
### CLI ([cihai-cli](https://cihai-cli.git-pull.com))
```console
$ pip install --user cihai-cli
```
Character lookup:
```console
$ cihai info 好
```
```yaml
char: 好
kCantonese: hou2 hou3
kDefinition: good, excellent, fine; well
kHangul: 호
kJapaneseOn: KOU
kKorean: HO
kMandarin: hǎo
kTang: "*xɑ̀u *xɑ̌u"
kTotalStrokes: "6"
kVietnamese: háo
ucn: U+597D
```
Reverse lookup:
```console
$ cihai reverse library
```
```yaml
char: 圕
kCangjie: WLGA
kCantonese: syu1
kCihaiT: '308.302'
kDefinition: library
kMandarin: tú
kTotalStrokes: '13'
ucn: U+5715
--------
```
### UNIHAN data
All datasets that cihai uses have stand-alone tools to export their data. No library required.
- [unihan-etl](https://unihan-etl.git-pull.com) - [UNIHAN](http://unicode.org/charts/unihan.html)
data exports for csv, yaml and json.
## Developing
```console
$ git clone https://github.com/cihai/cihai.git
```
```console
$ cd cihai/
```
[Bootstrap your environment and learn more about contributing](https://cihai.git-pull.com/contributing/). We use the same conventions / tools across all cihai projects: `pytest`, `sphinx`, `mypy`, `ruff`, `tmuxp`, and file watcher helpers (e.g. `entr(1)`).
## Python versions
- 0.19.0: Last Python 3.7 release
## Quick links
- [Quickstart](https://cihai.git-pull.com/quickstart.html)
- [Datasets](https://cihai.git-pull.com/datasets.html) a full list of current and future data sets
- Python [API](https://cihai.git-pull.com/api.html)
- [Roadmap](https://cihai.git-pull.com/design-and-planning/)
- Python support: >= 3.8, pypy
- Source: <https://github.com/cihai/cihai>
- Docs: <https://cihai.git-pull.com>
- Changelog: <https://cihai.git-pull.com/history.html>
- API: <https://cihai.git-pull.com/api.html>
- Issues: <https://github.com/cihai/cihai/issues>
- Test coverage: <https://codecov.io/gh/cihai/cihai>
- pypi: <https://pypi.python.org/pypi/cihai>
- OpenHub: <https://www.openhub.net/p/cihai>
- License: MIT
[](https://cihai.git-pull.com/)
[](https://github.com/cihai/cihai/actions?query=workflow%3A%22tests%22)
|
[
"Multilinguality",
"Syntactic Text Processing"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/SamuraiT/mecab-python3
|
2014-05-31T08:47:04Z
|
mecab-python. mecab-python. you can find original version here:http://taku910.github.io/mecab/
|
|
[](https://pypi.org/project/mecab-python3/)

[](https://pypi.org/project/mecab-python3/)

# mecab-python3
This is a Python wrapper for the [MeCab][] morphological analyzer for Japanese
text. It currently works with Python 3.8 and greater.
**Note:** If using MacOS Big Sur, you'll need to upgrade pip to version 20.3 or
higher to use wheels due to a pip issue.
**Issues do not need to be written in English (issueを英語で書く必要はありません).**
[MeCab]: https://taku910.github.io/mecab/
Note that Windows wheels require a [Microsoft Visual C++
Redistributable][msvc], so be sure to install that.
[msvc]: https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads
# Basic usage
```py
>>> import MeCab
>>> wakati = MeCab.Tagger("-Owakati")
>>> wakati.parse("pythonが大好きです").split()
['python', 'が', '大好き', 'です']
>>> tagger = MeCab.Tagger()
>>> print(tagger.parse("pythonが大好きです"))
python python python python 名詞-普通名詞-一般
が ガ ガ が 助詞-格助詞
大好き ダイスキ ダイスキ 大好き 形状詞-一般
です デス デス です 助動詞 助動詞-デス 終止形-一般
EOS
```
The API for `mecab-python3` closely follows the API for MeCab itself,
even when this makes it not very “Pythonic.” Please consult the [official MeCab
documentation][mecab-docs] for more information.
[mecab-docs]: https://taku910.github.io/mecab/
# Installation
Binary wheels are available for MacOS X, Linux, and Windows (64bit) and are
installed by default when you use `pip`:
```sh
pip install mecab-python3
```
These wheels include a copy of the MeCab library, but not a dictionary. In
order to use MeCab you'll need to install a dictionary. `unidic-lite` is a good
one to start with:
```sh
pip install unidic-lite
```
To build from source using pip,
```sh
pip install --no-binary :all: mecab-python3
```
## Dictionaries
In order to use MeCab, you must install a dictionary. There are many different dictionaries available for MeCab. These UniDic packages, which include slight modifications for ease of use, are recommended:
- [unidic](https://github.com/polm/unidic-py): The latest full UniDic.
- [unidic-lite](https://github.com/polm/unidic-lite): A slightly modified UniDic 2.1.2, chosen for its small size.
The dictionaries below are not recommended due to being unmaintained for many years, but they are available for use with legacy applications.
- [ipadic](https://github.com/polm/ipadic-py)
- [jumandic](https://github.com/polm/jumandic-py)
For more details on the differences between dictionaries see [here](https://www.dampfkraft.com/nlp/japanese-tokenizer-dictionaries.html).
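If a dictionary is installed as a Python package, you can also point the tagger at it explicitly rather than relying on a system-wide configuration. A minimal sketch, assuming the `unidic-lite` package (which exposes its install location as `unidic_lite.DICDIR`):

```py
import MeCab
import unidic_lite  # assumes `pip install unidic-lite` has been run

# Read the dictionary from the unidic-lite package directory explicitly.
tagger = MeCab.Tagger(f"-d {unidic_lite.DICDIR}")
print(tagger.parse("pythonが大好きです"))
```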
# Common Issues
If you get a `RuntimeError` when you try to run MeCab, here are some things to check:
## Windows Redistributable
You have to install [this][msvc] to use this package on Windows.
## Installing a Dictionary
Run `pip install unidic-lite` and confirm that works. If that fixes your
problem, you either don't have a dictionary installed, or you need to specify
your dictionary path like this:
tagger = MeCab.Tagger('-r /dev/null -d /usr/local/lib/mecab/dic/mydic')
Note: on Windows, use `nul` instead of `/dev/null`. Alternately, if you have a
`mecabrc` you can use the path after `-r`.
## Specifying a mecabrc
If you get this error:
error message: [ifs] no such file or directory: /usr/local/etc/mecabrc
You need to specify a `mecabrc` file. It's OK to specify an empty file, it just
has to exist. You can specify a `mecabrc` with `-r`. This may be necessary on
Debian or Ubuntu, where the `mecabrc` is in `/etc/mecabrc`.
You can specify an empty `mecabrc` like this:
tagger = MeCab.Tagger('-r/dev/null -d/home/hoge/mydic')
## Using Unsupported Output Modes like `-Ochasen`
Chasen output is not a built-in feature of MeCab, you must specify it in your
`dicrc` or `mecabrc`. Notably, Unidic does not include Chasen output format.
Please see [the MeCab documentation](https://taku910.github.io/mecab/#format).
# Alternatives
- [fugashi](https://github.com/polm/fugashi) is a Cython wrapper for MeCab with a Pythonic interface, by the current maintainer of this library
- [SudachiPy](https://github.com/WorksApplications/sudachi.rs) is a modern tokenizer with an actively maintained dictionary
- [pymecab-ko](https://github.com/NoUnique/pymecab-ko) is a wrapper of the Korean MeCab fork [mecab-ko](https://bitbucket.org/eunjeon/mecab-ko/src/master/) based on mecab-python3
- [KoNLPy](https://konlpy.org/en/latest/) is a library for Korean NLP that includes a MeCab wrapper
# Licensing
Like MeCab itself, `mecab-python3` is copyrighted free software by
Taku Kudo <[email protected]> and Nippon Telegraph and Telephone Corporation,
and is distributed under a 3-clause BSD license (see the file `BSD`).
Alternatively, it may be redistributed under the terms of the
GNU General Public License, version 2 (see the file `GPL`) or the
GNU Lesser General Public License, version 2.1 (see the file `LGPL`).
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/hakatashi/kyujitai.js
|
2014-09-06T08:05:01Z
|
Utility collections for making Japanese text old-fashioned
|
|
# kyujitai.js
[](https://travis-ci.org/hakatashi/kyujitai.js)
[](https://greenkeeper.io/)
Utility collections for making Japanese text old-fashioned.
## install
npm install kyujitai
## Use
```javascript
const Kyujitai = require('kyujitai');
const kyujitai = new Kyujitai((error) => {
kyujitai.encode('旧字体'); //=> '舊字體'
});
```
## Usage
### new Kyujitai([options], [callback])
Constructor.
* `options`: [Object]
* `callback`: [Function(error)] Called when construction has completed.
- `error`: [Error] Supplied if construction failed.
### kyujitai.encode(string, [options])
Encode string from shinjitai to kyujitai.
* `string`: [String] Input string
* `options`: [Object]
- `options.IVD`: [Boolean] `true` if you want to allow IVS for the encoded string. Default is false.
* Returns: [String] Output string
```javascript
kyujitai.encode('旧字体'); //=> '舊字體'
kyujitai.encode('画期的図画'); //=> '劃期的圖畫'
kyujitai.encode('弁明'); //=> '辯明'
kyujitai.encode('弁償'); //=> '辨償'
kyujitai.encode('花弁'); //=> '花瓣'
kyujitai.encode('弁髪'); //=> '辮髮'
```
### kyujitai.decode(string, [options])
Decode string from kyujitai to shinjitai.
* `string`: [String] Input string
* `options`: [Object]
* Returns: [String] Output string
|
[
"Low-Resource NLP",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/ikegami-yukino/rakutenma-python
|
2015-01-01T21:40:43Z
|
Rakuten MA (Python version)
|
|
Rakuten MA Python
===================
|travis| |coveralls| |pyversion| |version| |landscape| |license|
Rakuten MA Python (morphological analyzer) is a Python version of Rakuten MA (word segmentor + PoS Tagger) for Chinese and Japanese.
For details about Rakuten MA, See https://github.com/rakuten-nlp/rakutenma
See also http://qiita.com/yukinoi/items/925bc238185aa2fad8a7 (In Japanese)
Contributions are welcome!
Installation
==============
::
pip install rakutenma
Example
===========
.. code:: python
from rakutenma import RakutenMA
# Initialize a RakutenMA instance with an empty model
# the default ja feature set is set already
rma = RakutenMA()
# Let's analyze a sample sentence (from http://tatoeba.org/jpn/sentences/show/103809)
# With a disastrous result, since the model is empty!
print(rma.tokenize("彼は新しい仕事できっと成功するだろう。"))
# Feed the model with ten sample sentences from tatoeba.com
# "tatoeba.json" is available at https://github.com/rakuten-nlp/rakutenma
import json
tatoeba = json.load(open("tatoeba.json"))
for i in tatoeba:
rma.train_one(i)
# Now what does the result look like?
print(rma.tokenize("彼は新しい仕事できっと成功するだろう。"))
# Initialize a RakutenMA instance with a pre-trained model
rma = RakutenMA(phi=1024, c=0.007812) # Specify hyperparameter for SCW (for demonstration purpose)
rma.load("model_ja.json")
# Set the feature hash function (15bit)
rma.hash_func = rma.create_hash_func(15)
# Tokenize one sample sentence
print(rma.tokenize("うらにわにはにわにわとりがいる"));
# Re-train the model feeding the right answer (pairs of [token, PoS tag])
res = rma.train_one(
[["うらにわ","N-nc"],
["に","P-k"],
["は","P-rj"],
["にわ","N-n"],
["にわとり","N-nc"],
["が","P-k"],
["いる","V-c"]])
# The result of train_one contains:
# sys: the system output (using the current model)
# ans: answer fed by the user
# update: whether the model was updated
print(res)
# Now what does the result look like?
print(rma.tokenize("うらにわにはにわにわとりがいる"))
NOTE
===========
Added API
--------------
As compared to the original RakutenMA, the following methods are added (see the sketch after this list):

- RakutenMA::load(model_path) - Load model from JSON file
- RakutenMA::save(model_path) - Save model to path
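For example (a minimal sketch; the file names below are placeholders):

.. code:: python

    from rakutenma import RakutenMA

    rma = RakutenMA()
    rma.load("model_ja.json")                 # read model weights from a JSON file
    rma.hash_func = rma.create_hash_func(15)  # 15-bit feature hashing, as in the example above
    print(rma.tokenize("うらにわにはにわにわとりがいる"))
    rma.save("model_ja_copy.json")            # write the current model to another path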
misc
--------------
As the initial setting, the following values are set:
- rma.featset = CTYPE_JA_PATTERNS # RakutenMA.default_featset_ja
- rma.hash_func = rma.create_hash_func(15)
- rma.tag_scheme = "SBIEO" # if using Chinese, set "IOB2"
LICENSE
=========
Apache License version 2.0
Copyright
=============
Rakuten MA Python
(c) 2015- Yukino Ikegami. All Rights Reserved.
Rakuten MA (original)
(c) 2014 Rakuten NLP Project. All Rights Reserved.
.. |travis| image:: https://travis-ci.org/ikegami-yukino/rakutenma-python.svg?branch=master
:target: https://travis-ci.org/ikegami-yukino/rakutenma-python
:alt: travis-ci.org
.. |coveralls| image:: https://coveralls.io/repos/ikegami-yukino/rakutenma-python/badge.png
:target: https://coveralls.io/r/ikegami-yukino/rakutenma-python
:alt: coveralls.io
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/rakutenma.svg
.. |version| image:: https://img.shields.io/pypi/v/rakutenma.svg
:target: http://pypi.python.org/pypi/rakutenma/
:alt: latest version
.. |landscape| image:: https://landscape.io/github/ikegami-yukino/rakutenma-python/master/landscape.svg?style=flat
:target: https://landscape.io/github/ikegami-yukino/rakutenma-python/master
:alt: Code Health
.. |license| image:: https://img.shields.io/pypi/l/rakutenma.svg
:target: http://pypi.python.org/pypi/rakutenma/
:alt: license
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/google/budoux
|
2015-03-18T18:22:31Z
|
Standalone. Small. Language-neutral. BudouX is the successor to Budou, the machine learning powered line break organizer tool.
|
|
<!-- markdownlint-disable MD014 -->
# BudouX
[](https://pypi.org/project/budoux/) [](https://www.npmjs.com/package/budoux) [](https://mvnrepository.com/artifact/com.google.budoux/budoux)
Standalone. Small. Language-neutral.
BudouX is the successor to [Budou](https://github.com/google/budou), the machine learning powered line break organizer tool.

It is **standalone**. It works with no dependency on third-party word segmenters such as Google cloud natural language API.
It is **small**. It takes only around 15 KB including its machine learning model. It's reasonable to use it even on the client-side.
It is **language-neutral**. You can train a model for any language by feeding a dataset to BudouX’s training script.
Last but not least, BudouX supports HTML inputs.
## Demo
<https://google.github.io/budoux>
## Natural languages supported by pretrained models
- Japanese
- Simplified Chinese
- Traditional Chinese
- Thai
### Korean support?
Korean uses spaces between words, so you can generally prevent words from being split across lines by applying the CSS property `word-break: keep-all` to the paragraph, which should be much more performant than installing BudouX.
That said, we're happy to explore dedicated Korean language support if the above solution proves insufficient.
## Supported Programming languages
- Python
- [JavaScript](https://github.com/google/budoux/tree/main/javascript/)
- [Java](https://github.com/google/budoux/tree/main/java/)
## Python module
### Install
```shellsession
$ pip install budoux
```
### Usage
#### Library
You can get a list of phrases by feeding a sentence to the parser.
The easiest way to get a parser is to load the default parser for each language.
**Japanese:**
```python
import budoux
parser = budoux.load_default_japanese_parser()
print(parser.parse('今日は天気です。'))
# ['今日は', '天気です。']
```
**Simplified Chinese:**
```python
import budoux
parser = budoux.load_default_simplified_chinese_parser()
print(parser.parse('今天是晴天。'))
# ['今天', '是', '晴天。']
```
**Traditional Chinese:**
```python
import budoux
parser = budoux.load_default_traditional_chinese_parser()
print(parser.parse('今天是晴天。'))
# ['今天', '是', '晴天。']
```
**Thai:**
```python
import budoux
parser = budoux.load_default_thai_parser()
print(parser.parse('วันนี้อากาศดี'))
# ['วัน', 'นี้', 'อากาศ', 'ดี']
```
You can also translate an HTML string to wrap phrases with non-breaking markup.
The default parser uses zero-width space (U+200B) to separate phrases.
```python
print(parser.translate_html_string('今日は<b>とても天気</b>です。'))
# <span style="word-break: keep-all; overflow-wrap: anywhere;">今日は<b>\u200bとても\u200b天気</b>です。</span>
```
Please note that separators are denoted as `\u200b` in the example above for
illustrative purposes, but the actual output is an invisible string as it's a
zero-width space.
If you have a custom model, you can use it as follows.
```python
with open('/path/to/your/model.json') as f:
model = json.load(f)
parser = budoux.Parser(model)
```
A model file for BudouX is a JSON file that contains pairs of a feature and its score extracted by machine learning training.
Each score represents the significance of the feature in determining whether to break the sentence at a specific point.
For more details of the JavaScript model, please refer to [JavaScript module README](https://github.com/google/budoux/tree/main/javascript/README.md).
#### CLI
You can also format inputs on your terminal with `budoux` command.
```shellsession
$ budoux 本日は晴天です。 # default: japanese
本日は
晴天です。
$ budoux -l ja 本日は晴天です。
本日は
晴天です。
$ budoux -l zh-hans 今天天气晴朗。
今天
天气
晴朗。
$ budoux -l zh-hant 今天天氣晴朗。
今天
天氣
晴朗。
$ budoux -l th วันนี้อากาศดี
วัน
นี้
อากาศ
ดี
```
```shellsession
$ echo $'本日は晴天です。\n明日は曇りでしょう。' | budoux
本日は
晴天です。
---
明日は
曇りでしょう。
```
```shellsession
$ budoux 本日は晴天です。 -H
<span style="word-break: keep-all; overflow-wrap: anywhere;">本日は\u200b晴天です。</span>
```
Please note that separators are denoted as `\u200b` in the example above for
illustrative purposes, but the actual output is an invisible string as it's a
zero-width space.
If you want to see help, run `budoux -h`.
```shellsession
$ budoux -h
usage: budoux [-h] [-H] [-m JSON | -l LANG] [-d STR] [-V] [TXT]
BudouX is the successor to Budou,
the machine learning powered line break organizer tool.
positional arguments:
TXT text (default: None)
optional arguments:
-h, --help show this help message and exit
-H, --html HTML mode (default: False)
-m JSON, --model JSON custom model file path (default: /path/to/budoux/models/ja.json)
-l LANG, --lang LANG language of custom model (default: None)
-d STR, --delim STR output delimiter in TEXT mode (default: ---)
-V, --version show program's version number and exit
supported languages of `-l`, `--lang`:
- ja
- zh-hans
- zh-hant
- th
```
## Caveat
BudouX supports HTML inputs and outputs HTML strings with markup that wraps phrases, but it's not meant to be used as an HTML sanitizer. **BudouX doesn't sanitize any inputs.** Malicious HTML inputs yield malicious HTML outputs. Please use it with an appropriate sanitizer library if you don't trust the input.
## Background
English text has many clues, like spacing and hyphenation, that enable beautiful and readable line breaks. However, some CJK languages lack these clues, and so are notoriously more difficult to process. Line breaks can occur randomly and usually in the middle of a word or a phrase without a more careful approach. This is a long-standing issue in typography on the Web, which results in a degradation of readability.
Budou was proposed as a solution to this problem in 2016. It automatically translates CJK sentences into HTML with lexical phrases wrapped in non-breaking markup, so as to semantically control line breaks. Budou has solved this problem to some extent, but it still has some problems integrating with modern web production workflow.
The biggest barrier in applying Budou to a website is that it has dependency on third-party word segmenters. Usually a word segmenter is a large program that is infeasible to download for every web page request. It would also be an undesirable option making a request to a cloud-based word segmentation service for every sentence, considering the speed and cost. That’s why we need a standalone line break organizer tool equipped with its own segmentation engine small enough to be bundled in a client-side JavaScript code.
Budou*X* is the successor to Budou, designed to be integrated with your website with no hassle.
## How it works
BudouX uses the [AdaBoost algorithm](https://en.wikipedia.org/wiki/AdaBoost) to segment a sentence into phrases by considering the task as a binary classification problem to predict whether to break or not between all characters. It uses features such as the characters around the break point, their Unicode blocks, and combinations of them to make a prediction. The output machine learning model, which is encoded as a JSON file, stores pairs of the feature and its significance score. The BudouX parser takes a model file to construct a segmenter and translates input sentences into a list of phrases.
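As an illustration of this idea only (a minimal sketch, not BudouX's actual code or model format; the feature names and the zero threshold are assumptions), a parser of this kind can score each candidate break point by summing the scores of the features observed around it:

```python
# Minimal sketch: break between characters when the summed feature score is positive.
def segment(sentence, model, threshold=0.0):
    """model maps feature strings (hypothetical format) to significance scores."""
    if not sentence:
        return []
    phrases = [sentence[0]]
    for i in range(1, len(sentence)):
        feats = [
            f"L:{sentence[i - 1]}",               # character left of the boundary
            f"R:{sentence[i]}",                   # character right of the boundary
            f"B:{sentence[i - 1]}{sentence[i]}",  # bigram across the boundary
        ]
        score = sum(model.get(f, 0.0) for f in feats)
        if score > threshold:
            phrases.append(sentence[i])   # start a new phrase here
        else:
            phrases[-1] += sentence[i]    # extend the current phrase
    return phrases

# Toy model that favours breaking right after "は".
toy_model = {"L:は": 1.5, "B:は天": 0.5}
print(segment("今日は天気です。", toy_model))  # ['今日は', '天気です。']
```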
## Building a custom model
You can build your own custom model for any language by preparing training data in the target language.
A training dataset is a large text file that consists of sentences separated by phrases with the separator symbol "▁" (U+2581) like below.
```text
私は▁遅刻魔で、▁待ち合わせに▁いつも▁遅刻してしまいます。
メールで▁待ち合わせ▁相手に▁一言、▁「ごめんね」と▁謝れば▁どうにか▁なると▁思っていました。
海外では▁ケータイを▁持っていない。
```
Assuming the text file is saved as `mysource.txt`, you can build your own custom model by running the following commands.
```shellsession
$ pip install .[dev]
$ python scripts/encode_data.py mysource.txt -o encoded_data.txt
$ python scripts/train.py encoded_data.txt -o weights.txt
$ python scripts/build_model.py weights.txt -o mymodel.json
```
Please note that `train.py` takes time to complete depending on your computer resources.
The good news is that the training algorithm is an [anytime algorithm](https://en.wikipedia.org/wiki/Anytime_algorithm), so you can get a weights file even if you interrupt the execution. You can still build a valid model file by passing that weights file to `build_model.py` in such a case.
## Constructing a training dataset from the KNBC corpus for Japanese
The default model for Japanese (`budoux/models/ja.json`) is built using the [KNBC corpus](https://nlp.ist.i.kyoto-u.ac.jp/kuntt/).
You can create a training dataset, which we name `source_knbc.txt` below for example, from the corpus by running the following commands:
```shellsession
$ curl -o knbc.tar.bz2 https://nlp.ist.i.kyoto-u.ac.jp/kuntt/KNBC_v1.0_090925_utf8.tar.bz2
$ tar -xf knbc.tar.bz2 # outputs KNBC_v1.0_090925_utf8 directory
$ python scripts/prepare_knbc.py KNBC_v1.0_090925_utf8 -o source_knbc.txt
```
## Author
[Shuhei Iitsuka](https://tushuhei.com)
## Disclaimer
This is not an officially supported Google product.
|
[
"Chunking",
"Syntactic Text Processing",
"Text Segmentation"
] |
[] |
true |
https://github.com/scriptin/topokanji
|
2015-05-28T17:52:28Z
|
Topologically ordered lists of kanji for effective learning
|
30-second explanation for people who want to learn kanji:
It is best to learn kanji starting from simple characters and then to learn complex ones as compositions of "parts", which are called "radicals" or "components". For example:
一 → 二 → 三
丨 → 凵 → 山 → 出
言 → 五 → 口 → 語
It is also smart to learn more common kanji first.
TopoKanji
This project is based on those two ideas and
provides properly ordered lists of kanji to make
your learning process as fast, simple, and effective
as possible.
Motivation for this project initially came from reading this
article: The 5 Biggest Mistakes People Make When
Learning Kanji.
First 100 kanji from lists/aozora.txt (formatted for
convenience):
These lists can be found in the lists directory. They only differ in the order of kanji. Each file contains a list of kanji, ordered as described in the following sections. There are a few options (see Used data for details):
aozora.(json|txt) - ordered by kanji frequency
in Japanese fiction and non-fiction books; I
recommend this list if you're starting to learn kanji
news.(json|txt) - ordered by kanji frequency in
online news
twitter.(json|txt) - ordered by kanji frequency
in Twitter messages
wikipedia.(json|txt) - ordered by kanji
frequency in Wikipedia articles
all.(json|txt) - combined "average" version of
all previous; this one is experimental, I don't
recommend using it
You can use these lists to build an Anki deck or just as a
guidance. If you're looking for "names" or meanings of
kanji, you might want to check my kanji-keys project.
人一丨口日目儿見凵山
出十八木未丶来大亅了
子心土冂田思二丁彳行
寸寺時卜上丿刀分厶禾
私中彐尹事可亻何自乂
又皮彼亠方生月門間扌
手言女本乙气気干年三
耂者刂前勹勿豕冖宀家
今下白勺的云牛物立小
文矢知入乍作聿書学合
What is a properly ordered list of kanji?

If you look at a kanji like 語, you can see it consists of at least three distinct parts: 言, 五, 口. Those are kanji by themselves too. The idea behind this project is to find an order for about 2000-2500 common kanji in which no kanji appears before its parts, so you only learn a new kanji when you already know its components.

Properties of properly ordered lists

1. No kanji appears before its parts (components). In fact, if you treat kanji as nodes in a graph structure and connect them with directed edges, where each edge means "kanji A includes kanji B as a component", it all forms a directed acyclic graph (DAG). For any DAG, it is possible to build a topological order, which is basically what "no kanji appears before its parts" means.
2. More common kanji come first. That way you learn useful characters as soon as possible.

Algorithm

Topological sorting is done using a modified version of the Kahn (1962) algorithm with an intermediate sorting step which deals with the second property above. This intermediate sorting uses the "weight" of each character: common kanji (lighter) tend to appear before rare kanji (heavier). See the source code for details; a simplified, illustrative sketch is given after the next section.

Used data

The initial unsorted list contains only kanji which are present in the KanjiVG project, so for each character there is data on its shape and stroke order.
Characters are split into components using the CJK Decompositions Data project, along with "fixes" to simplify the final lists and avoid characters which are not present in the initial list.
Statistical data on kanji usage frequencies was collected by processing raw textual data from various sources. See the kanji-frequency repository for details.
Kanji list covers about 95-99% of kanji found in various
Japanese texts. Generally, the goal is provide something
similar to Jōyō kanji, but based on actual data. Radicals
are also included, but only those which are parts of
some kanji in the list.
Kanji/radical must NOT appear in this list if it is:
not included in KanjiVG character set
primarily used in names (people, places, etc.) or in
some specific terms (religion, mythology, etc.)
mostly used because of its' shape, e.g. a part of text
emoticons/kaomoji like ( ^ω^)个
a part of currently popular meme,
manga/anime/dorama/movie title, #hashtag, etc.,
and otherwise is not commonly used
Files in lists directory are final lists.
*.txt files contain lists as plain text, one
character per line; those files can be interpreted as
CSV/TSV files with a single column
*.json files contain lists as JSON arrays
All files are encoded in UTF-8, without byte order mark
(BOM), and have unix-style line endings, LF .
Files in dependencies directory are "flat" equivalents of
CJK-decompositions (see below). "Dependency" here
roughly means "a component of the visual
decomposition" for kanji.
1-to-1.txt has a format compatible with tsort
command line utility; first character in each line is
"target" kanji, second character is target's
dependency or 0
1-to-1.json contains a JSON array with the
same data as in 1-to-1.txt
Which kanji are (not) included?
Files and formats
lists directory
dependencies directory
1-to-N.txt is similar, but lists all "dependecies" at
once
1-to-N.json contains a JSON object with the
same data as in 1-to-N.txt
All files are encoded in UTF-8, without byte order mark
(BOM), and have unix-style line endings, LF .
kanji.json - data for kanji included in final
ordered lists, including radicals
kanjivg.txt - list of kanji from KanjiVG
cjk-decomp-{VERSION}.txt - data from CJK
Decompositions Data, without any modifications
cjk-decomp-override.txt - data to override some
CJK's decompositions
kanji-frequency/*.json - kanji frequency tables
All files are encoded in UTF-8, without byte order mark
(BOM). All files, except for cjk-decomp-{VERSION}.txt ,
have unix-style line endings, LF .
Contains table with data for kanji, including radicals.
Columns are:
1. Character itself
2. Stroke count
3. Frequency flag:
true if it is a common kanji
false if it is primarily used as a
radical/component and unlikely to be seen
within top 3000 in kanji usage frequency tables.
In this case character is only listed because it's
useful for decomposition, not as a standalone
kanji
Resrictions:
No duplicates
Each character must be listed in kanjivg.txt
Each character must be listed on the left hand side
in exactly one line in cjk-decomp-{VERSION}.txt
Each character may be listed on the left hand side
in exactly one line in cjk-decomp-override.txt
data directory
data/kanji.json
Simple list of characters which are present in KanjiVG
project. Those are from the list of *.svg files in
KanjiVG's Github repository.
Data file from CJK Decompositions Data project, see
description of its' format.
Same format as cjk-decomp-{VERSION}.txt , except:
comments starting with # allowed
purpose of each record in this file is to override the
one from cjk-decomp-{VERSION}.txt
type of decomposition is always fix , which just
means "fix a record for the same character from
original file"
Special character 0 is used to distinguish invalid
decompositions (which lead to characters with no
graphical representation) from those which just can't be
decomposed further into something meaningful. For
example, 一:fix(0) means that this kanji can't be
further decomposed, since it's just a single stroke.
NOTE: Strictly speaking, records in this file are not
always "visual decompositions" (but most of them are).
Instead, it's just an attempt to provide meaningful
recommendations of kanji learning order.
See kanji-frequency repository for details.
You must have Node.js and Git installed
1. git clone https://github.com/THIS/REPO.git
2. npm install
3. node build.js + commands and arguments
described below
data/kanjivg.txt
data/cjk-decomp-{VERSION}.txt
data/cjk-decomp-override.txt
data/kanji-frequency/*.json
Usage
Command-line commands and arguments
show - only display sorted list without writing into
files
(optional) --per-line=NUM - explicitly tell how
many characters per line to display. 50 by
default. Applicable only to (no arguments)
(optional) --freq-table=TABLE_NAME - use
only one frequency table. Table names are file
names from data/kanji-frequency directory,
without .json extension, e.g. all
("combined" list), aozora , etc. When omitted,
all frequency tables are used
coverage - show tables coverage, i.e. which
fraction of characters from each frequency table is
included into kanji list
suggest-add - suggest kanji to add in a list, based
on coverage within kanji usage frequency tables
(required) --num=NUM - how many
(optional) --mean-type=MEAN_TYPE - same as
previous, sort by given mean type:
arithmetic (most "extreme"), geometric ,
harmonic (default, most "conservative"). See
Pythagorean means for details
suggest-remove - suggest kanji to remove from a
list, reverse of suggest-add
(required) --num=NUM - see above
(optional) --mean-type=MEAN_TYPE - see
above
save - update files with final lists
This is a multi-license project. Choose any license from
this list:
Apache-2.0 or any later version
License
|
# TopoKanji
> **30 seconds explanation for people who want to learn kanji:**
>
> It is best to learn kanji starting from simple characters and then learning complex ones as compositions of "parts", which are called "radicals" or "components". For example:
>
> - 一 → 二 → 三
> - 丨 → 凵 → 山 → 出
> - 言 → 五 → 口 → 語
>
> It is also smart to learn more common kanji first.
>
> This project is based on those two ideas and provides properly ordered lists of kanji to make your learning process as fast, simple, and effective as possible.
Motivation for this project initially came from reading this article: [The 5 Biggest Mistakes People Make When Learning Kanji][mistakes].
First 100 kanji from [lists/aozora.txt](lists/aozora.txt) (formatted for convenience):
人一丨口日目儿見凵山
出十八木未丶来大亅了
子心土冂田思二丁彳行
寸寺時卜上丿刀分厶禾
私中彐尹事可亻何自乂
又皮彼亠方生月門間扌
手言女本乙气気干年三
耂者刂前勹勿豕冖宀家
今下白勺的云牛物立小
文矢知入乍作聿書学合
These lists can be found in the [`lists` directory](lists). They only differ in the order of kanji. Each file contains a list of kanji, ordered as described in the following sections. There are a few options (see [Used data](#used-data) for details):
- `aozora.(json|txt)` - ordered by kanji frequency in Japanese fiction and non-fiction books; I recommend this list if you're starting to learn kanji
- `news.(json|txt)` - ordered by kanji frequency in online news
- `twitter.(json|txt)` - ordered by kanji frequency in Twitter messages
- `wikipedia.(json|txt)` - ordered by kanji frequency in Wikipedia articles
- `all.(json|txt)` - combined "average" version of all previous; this one is experimental, I don't recommend using it
You can use these lists to build an [Anki][] deck or just as guidance. If you're looking for "names" or meanings of kanji, you might want to check my [kanji-keys](https://github.com/scriptin/kanji-keys) project.
## What is a properly ordered list of kanji?
If you look at a kanji like 語, you can see it consists of at least three distinct parts: 言, 五, 口. Those are kanji by themselves too. The idea behind this project is to find an order of about 2000-2500 common kanji in which no kanji appears before its parts, so you only learn a new kanji when you already know its components.
### Properties of properly ordered lists
1. **No kanji appears before its parts (components).** In fact, if you treat kanji as nodes in a [graph][] structure and connect them with directed edges, where each edge means "kanji A includes kanji B as a component", it all forms a [directed acyclic graph (DAG)][dag]. For any DAG, it is possible to build a [topological order][topsort], which is basically what "no kanji appears before its parts" means.
2. **More common kanji come first.** That way you learn useful characters as soon as possible.
### Algorithm
[Topological sorting][topsort] is done using a modified version of the [Kahn (1962) algorithm][kahn] with an intermediate sorting step which deals with the second property above. This intermediate sorting uses the "weight" of each character: common kanji (lighter) tend to appear before rare kanji (heavier). See the source code for details.
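For illustration only, here is a minimal Python sketch of that idea: a Kahn-style topological sort that always releases the lightest (most common) available character first. It is not the project's actual `build.js` implementation, and the sample data and weights are made up:

```python
import heapq

def weighted_topological_order(parts, weight):
    """Order kanji so components come first; lighter (more common) kanji come earlier.

    parts:  dict mapping each kanji to the set of its components (also keys of the dict).
    weight: dict mapping each kanji to a number (lower = more common).
    """
    # How many components each kanji is still waiting for.
    pending = {k: len(parts.get(k, set())) for k in parts}
    # Reverse edges: component -> kanji that contain it.
    contained_in = {k: [] for k in parts}
    for kanji, comps in parts.items():
        for c in comps:
            contained_in.setdefault(c, []).append(kanji)

    # Kahn's algorithm, but pop the lightest available kanji first.
    ready = [(weight.get(k, float("inf")), k) for k, n in pending.items() if n == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, kanji = heapq.heappop(ready)
        order.append(kanji)
        for nxt in contained_in.get(kanji, []):
            pending[nxt] -= 1
            if pending[nxt] == 0:
                heapq.heappush(ready, (weight.get(nxt, float("inf")), nxt))
    return order

# Example: 口, 五, 言 all come before 語; common kanji first among the available ones.
parts = {"一": set(), "口": set(), "五": set(), "言": set(), "語": {"言", "五", "口"}}
weight = {"一": 1, "口": 2, "言": 3, "五": 4, "語": 5}
print("".join(weighted_topological_order(parts, weight)))  # 一口言五語
```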
## Used data
The initial unsorted list contains only kanji which are present in the [KanjiVG][] project, so for each character there is data on its shape and stroke order.
Characters are split into components using the [CJK Decompositions Data][cjk] project, along with "fixes" to simplify the final lists and avoid characters which are not present in the initial list.
Statistical data of kanji usage frequencies was collected by processing raw textual data from various sources. See [kanji-frequency][] repository for details.
## Which kanji are (not) included?
The kanji list covers about 95-99% of kanji found in various Japanese texts. Generally, the goal is to provide something similar to the [Jōyō kanji][jouyou], but based on actual data. Radicals are also included, but only those which are parts of some kanji in the list.
Kanji/radical must **NOT** appear in this list if it is:
- not included in KanjiVG character set
- primarily used in names (people, places, etc.) or in some specific terms (religion, mythology, etc.)
- mostly used because of its shape, e.g. a part of text emoticons/kaomoji like `( ^ω^)个`
- a part of currently popular meme, manga/anime/dorama/movie title, #hashtag, etc., and otherwise is not commonly used
## Files and formats
### `lists` directory
Files in `lists` directory are final lists.
- `*.txt` files contain lists as plain text, one character per line; those files can be interpreted as CSV/TSV files with a single column
- `*.json` files contain lists as [JSON][] arrays
All files are encoded in UTF-8, without [byte order mark (BOM)][bom], and have unix-style [line endings][eol], `LF`.
### `dependencies` directory
Files in `dependencies` directory are "flat" equivalents of CJK-decompositions (see below). "Dependency" here roughly means "a component of the visual decomposition" for kanji.
- `1-to-1.txt` has a format compatible with [tsort][] command line utility; first character in each line is "target" kanji, second character is target's dependency or `0`
- `1-to-1.json` contains a JSON array with the same data as in `1-to-1.txt`
- `1-to-N.txt` is similar, but lists all "dependencies" at once (a parsing sketch follows below)
- `1-to-N.json` contains a JSON object with the same data as in `1-to-N.txt`
All files are encoded in UTF-8, without [byte order mark (BOM)][bom], and have unix-style [line endings][eol], `LF`.
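For illustration, here is a small Python sketch of reading the flat `1-to-N.txt` format into a dictionary. It assumes each line holds the target kanji followed by its components (or `0`), separated by whitespace; that layout matches the description above but is an assumption, not an official parser:

```python
def load_dependencies(path="dependencies/1-to-N.txt"):
    """Map each kanji to its list of components; '0' means no further decomposition."""
    deps = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            chars = line.split()
            if not chars:
                continue
            target, components = chars[0], chars[1:]
            deps[target] = [c for c in components if c != "0"]
    return deps

# deps = load_dependencies()
# print(deps.get("語"))  # expected to list the components of 語
```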
### `data` directory
- `kanji.json` - data for kanji included in final ordered lists, including [radicals][kangxi]
- `kanjivg.txt` - list of kanji from [KanjiVG][]
- `cjk-decomp-{VERSION}.txt` - data from [CJK Decompositions Data][cjk], without any modifications
- `cjk-decomp-override.txt` - data to override some CJK's decompositions
- `kanji-frequency/*.json` - kanji frequency tables
All files are encoded in UTF-8, without [byte order mark (BOM)][bom]. All files, except for `cjk-decomp-{VERSION}.txt`, have unix-style [line endings][eol], `LF`.
#### `data/kanji.json`
Contains a table with data for kanji, including radicals (a loading sketch follows at the end of this subsection). Columns are:
1. Character itself
2. Stroke count
3. Frequency flag:
- `true` if it is a common kanji
  - `false` if it is primarily used as a radical/component and unlikely to be seen within the top 3000 of the kanji usage frequency tables. In this case the character is only listed because it is useful for decomposition, not as a standalone kanji
Restrictions:
- No duplicates
- Each character must be listed in `kanjivg.txt`
- Each character must be listed on the left hand side in exactly one line in `cjk-decomp-{VERSION}.txt`
- Each character *may* be listed on the left hand side in exactly one line in `cjk-decomp-override.txt`
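A minimal Python sketch of reading this file, assuming each row is an array laid out as `[character, strokeCount, isCommon]` per the column description above (verify against the actual file before relying on it):

```python
import json

with open("data/kanji.json", encoding="utf-8") as f:
    rows = json.load(f)  # assumed: a JSON array of [character, strokeCount, isCommon] rows

common = [char for char, strokes, is_common in rows if is_common]
components_only = [char for char, strokes, is_common in rows if not is_common]
print(len(common), "common kanji,", len(components_only), "component-only characters")
```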
#### `data/kanjivg.txt`
Simple list of characters which are present in KanjiVG project. Those are from the list of `*.svg` files in [KanjiVG's Github repository][kanjivg-github].
#### `data/cjk-decomp-{VERSION}.txt`
Data file from the [CJK Decompositions Data][cjk] project; see the [description of its format][cjk-format].
#### `data/cjk-decomp-override.txt`
Same format as `cjk-decomp-{VERSION}.txt`, except:
- comments starting with `#` allowed
- purpose of each record in this file is to override the one from `cjk-decomp-{VERSION}.txt`
- type of decomposition is always `fix`, which just means "fix a record for the same character from original file"
Special character `0` is used to distinguish invalid decompositions (which lead to characters with no graphical representation) from those which just can't be decomposed further into something meaningful. For example, `一:fix(0)` means that this kanji can't be further decomposed, since it's just a single stroke.
NOTE: Strictly speaking, records in this file are not always "visual decompositions" (but most of them are). Instead, it's just an attempt to provide meaningful recommendations of kanji learning order.
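The record syntax can be inferred from the `一:fix(0)` example above as `character:type(component,component,...)`. Below is a hedged Python sketch of parsing one such record; the second sample record is made up for illustration, and a real parser for the override file would also skip `#` comment lines:

```python
import re

# Inferred record syntax: "<char>:<type>(<comp>,<comp>,...)", e.g. "一:fix(0)".
RECORD = re.compile(r"^(.):(\w+)\((.*)\)$")

def parse_record(line):
    """Return (character, decomposition type, components); '0' marks 'cannot be decomposed'."""
    m = RECORD.match(line.strip())
    if not m:
        raise ValueError(f"unrecognized record: {line!r}")
    char, kind, body = m.groups()
    components = [] if body == "0" else body.split(",")
    return char, kind, components

print(parse_record("一:fix(0)"))      # ('一', 'fix', [])
print(parse_record("語:fix(言,吾)"))  # made-up example -> ('語', 'fix', ['言', '吾'])
```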
#### `data/kanji-frequency/*.json`
See [kanji-frequency][] repository for details.
## Usage
You must have Node.js and Git installed
1. `git clone https://github.com/THIS/REPO.git`
2. `npm install`
3. `node build.js` + commands and arguments described below
### Command-line commands and arguments
- `show` - only display sorted list without writing into files
- (optional) `--per-line=NUM` - explicitly tell how many characters per line to display. `50` by default. Applicable only to (no arguments)
- (optional) `--freq-table=TABLE_NAME` - use only one frequency table. Table names are file names from `data/kanji-frequency` directory, without `.json` extension, e.g. `all` ("combined" list), `aozora`, etc. When omitted, all frequency tables are used
- `coverage` - show tables coverage, i.e. which fraction of characters from each frequency table is included in the kanji list
- `suggest-add` - suggest kanji to add in a list, based on coverage within kanji usage frequency tables
  - (required) `--num=NUM` - how many kanji to suggest
- (optional) `--mean-type=MEAN_TYPE` - same as previous, sort by given mean type: `arithmetic` (most "extreme"), `geometric`, `harmonic` (default, most "conservative"). See [Pythagorean means][mean-type] for details
- `suggest-remove` - suggest kanji to remove from a list, reverse of `suggest-add`
- (required) `--num=NUM` - see above
- (optional) `--mean-type=MEAN_TYPE` - see above
- `save` - update files with final lists
## License
This is a multi-license project. Choose any license from this list:
- [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) or any later version
- [CC-BY-4.0](http://creativecommons.org/licenses/by/4.0/) or any later version
- [EPL-1.0](https://www.eclipse.org/legal/epl-v10.html) or any later version
- [LGPL-3.0](http://www.gnu.org/licenses/lgpl-3.0.html) or any later version
- [MIT](http://opensource.org/licenses/MIT)
[mistakes]: http://www.tofugu.com/2010/03/25/the-5-biggest-mistakes-people-make-when-learning-kanji/
[anki]: http://ankisrs.net/
[graph]: https://en.wikipedia.org/wiki/Graph_(mathematics)
[dag]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
[topsort]: https://en.wikipedia.org/wiki/Topological_sorting
[tsort]: https://en.wikipedia.org/wiki/Tsort
[kahn]: http://dl.acm.org/citation.cfm?doid=368996.369025
[wiki-dumps]: https://dumps.wikimedia.org/
[jawiki]: https://dumps.wikimedia.org/jawiki/
[aozora]: http://www.aozora.gr.jp/
[twitter-stream]: https://dev.twitter.com/streaming/overview
[twitter-bot]: https://github.com/scriptin/twitter-kanji-frequency
[jouyou]: https://en.wikipedia.org/wiki/J%C5%8Dy%C5%8D_kanji
[kangxi]: https://en.wikipedia.org/wiki/Kangxi_radical
[kanjivg]: http://kanjivg.tagaini.net/
[kanjivg-github]: https://github.com/KanjiVG/kanjivg
[cjk]: https://cjkdecomp.codeplex.com/
[cjk-format]: https://cjkdecomp.codeplex.com/wikipage?title=cjk-decomp
[json]: http://json.org/
[bom]: https://en.wikipedia.org/wiki/Byte_order_mark
[eol]: https://en.wikipedia.org/wiki/Newline
[mean-type]: https://en.wikipedia.org/wiki/Pythagorean_means
[kanji-frequency]: https://github.com/scriptin/kanji-frequency
|
[] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/Kensuke-Mitsuzawa/JapaneseTokenizers
|
2015-09-01T10:24:45Z
|
A set of metrics for feature selection from text data
|
Kensuke-Mitsuzawa / JapaneseTokenizers
Public
Branches
Tags
This is simple python-wrapper for Japanese Tokenizers(A.K.A Tokenizer)
This project aims to call tokenizers and split a sentence into tokens as easy as possible.
And, this project supports various Tokenization tools common interface. Thus, it's easy to
compare output from various tokenizers.
This project is available also in Github.
About
aim to use JapaneseTokenizer as
easy as possible
# nlp # tokenizer # japanese-language
# mecab # juman # kytea
# mecab-neologd-dictionary
# dictionary-extension # jumanpp
Readme
MIT license
Activity
138 stars
4 watching
20 forks
Report repository
Releases 20
Possible to call jumandi…
Latest
on Mar 26, 2019
+ 19 releases
Packages
No packages published
Contributors
3
Languages
Code
Issues
8
Pull requests
Actions
Projects
Wiki
Security
Ins
What's this?
README
MIT license
If you find any bugs, please report them to github issues. Or any pull requests are welcomed!
Python 2.7
Python 3.x
checked in 3.5, 3.6, 3.7
simple/common interface among various tokenizers
simple/common interface for filtering with stopwords or Part-of-Speech condition
simple interface to add user-dictionary(mecab only)
Mecab is open source tokenizer system for various language(if you have dictionary for it)
See english documentation for detail
Juman is a tokenizer system developed by Kurohashi laboratory, Kyoto University, Japan.
Juman is strong for ambiguous writing style in Japanese, and is strong for new-comming words
thanks to Web based huge dictionary.
And, Juman tells you semantic meaning of words.
Juman++ is a tokenizer system developed by Kurohashi laboratory, Kyoto University, Japan.
Juman++ is succeeding system of Juman. It adopts RNN model for tokenization.
Juman++ is strong for ambigious writing style in Japanese, and is strong for new-comming words
thanks to Web based huge dictionary.
And, Juman tells you semantic meaning of words.
Note: New Juman++ dev-version(later than 2.x) is available at Github
Kytea is tokenizer tool developped by Graham Neubig.
Python 92.5%
Dockerfile 3.6%
Shell 3.6%
Makefile 0.3%
Requirements
Features
Supported Tokenizers
Mecab
Juman
Juman++
Kytea
Kytea has a different algorithm from one of Mecab or Juman.
See here to install MeCab system.
Mecab-neologd dictionary is a dictionary-extension based on ipadic-dictionary, which is basic
dictionary of Mecab.
With, Mecab-neologd dictionary, you're able to parse new-coming words make one token.
Here, new-coming words is such like, movie actor name or company name.....
See here and install mecab-neologd dictionary.
GCC version must be >= 5
Setting up
Tokenizers auto-install
make install
mecab-neologd dictionary auto-install
make install_neologd
Tokenizers manual-install
MeCab
Mecab Neologd dictionary
Juman
wget -O juman7.0.1.tar.bz2 "http://nlp.ist.i.kyoto-
u.ac.jp/DLcounter/lime.cgi?down=http://nlp.ist.i.kyoto-u.ac.jp/nl-
resource/juman/juman-7.01.tar.bz2&name=juman-7.01.tar.bz2"
bzip2 -dc juman7.0.1.tar.bz2 | tar xvf -
cd juman-7.01
./configure
make
[sudo] make install
Juman++
Install Kytea system
Kytea has python wrapper thanks to michiaki ariga. Install Kytea-python wrapper
During install, you see warning message when it fails to install pyknp or kytea .
if you see these messages, try to re-install these packages manually.
Tokenization Example(For python3.x. To see exmaple code for Python2.x, plaese see here)
wget http://lotus.kuee.kyoto-u.ac.jp/nl-resource/jumanpp/jumanpp-
1.02.tar.xz
tar xJvf jumanpp-1.02.tar.xz
cd jumanpp-1.02/
./configure
make
[sudo] make install
Kytea
wget http://www.phontron.com/kytea/download/kytea-0.4.7.tar.gz
tar -xvf kytea-0.4.7.tar
cd kytea-0.4.7
./configure
make
make install
pip install kytea
install
[sudo] python setup.py install
Note
Usage
import JapaneseTokenizer
input_sentence = '10日放送の「中居正広のミになる図書館」(テレビ朝日系)で、SMAPの中
居正広が、篠原信一の過去の勘違いを明かす一幕があった。'
# ipadic is well-maintained dictionary #
mecab_wrapper = JapaneseTokenizer.MecabWrapper(dictType='ipadic')
print(mecab_wrapper.tokenize(input_sentence).convert_list_object())
# neologd is automatically-generated dictionary from huge web-corpus #
Mecab, Juman, Kytea have different system of Part-of-Speech(POS).
You can check tables of Part-of-Speech(POS) here
natto-py is sophisticated package for tokenization. It supports following features
easy interface for tokenization
importing additional dictionary
partial parsing mode
MIT license
You could build an environment which has dependencies to test this package.
Simply, you build docker image and run docker container.
Develop environment is defined with test/docker-compose-dev.yml .
With the docker-compose.yml file, you could call python2.7 or python3.7
mecab_neologd_wrapper = JapaneseTokenizer.MecabWrapper(dictType='neologd')
print(mecab_neologd_wrapper.tokenize(input_sentence).convert_list_object())
Filtering example
import JapaneseTokenizer
# with word filtering by stopword & part-of-speech condition #
print(mecab_wrapper.tokenize(input_sentence).filter(stopwords=['テレビ朝
日'], pos_condition=[('名詞', '固有名詞')]).convert_list_object())
Part-of-speech structure
Similar Package
natto-py
LICENSE
For developers
Dev environment
If you're using Pycharm Professional edition, you could set docker-compose.yml as remote
interpreter.
To call python2.7, set /opt/conda/envs/p27/bin/python2.7
To call python3.7, set /opt/conda/envs/p37/bin/python3.7
These commands checks from procedures of package install until test of package.
Test environment
$ docker-compose build
|
[](LICENSE)[](https://travis-ci.org/Kensuke-Mitsuzawa/JapaneseTokenizers)
# What's this?
This is a simple Python wrapper for Japanese tokenizers.
This project aims to make it as easy as possible to call tokenizers and split a sentence into tokens.
It also provides a common interface across various tokenization tools, so it is easy to compare the output of different tokenizers.
This project is also available on [Github](https://github.com/Kensuke-Mitsuzawa/JapaneseTokenizers).
If you find any bugs, please report them via GitHub issues. Pull requests are also welcome!
# Requirements
- Python 2.7
- Python 3.x
- checked in 3.5, 3.6, 3.7
# Features
* a simple, common interface among various tokenizers (see the comparison sketch after this list)
* a simple, common interface for filtering with stopwords or part-of-speech conditions
* a simple interface for adding a user dictionary (MeCab only)
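As a rough illustration of that common interface, the sketch below runs the same sentence through two analyzers and prints both token lists. `MecabWrapper`, `tokenize()`, and `convert_list_object()` appear in the usage section below; `JumanWrapper` is assumed to follow the same pattern, so verify it against your installed version:

```python
import JapaneseTokenizer

input_sentence = '10日放送の「中居正広のミになる図書館」(テレビ朝日系)で、SMAPの中居正広が、篠原信一の過去の勘違いを明かす一幕があった。'

# MecabWrapper is documented below; JumanWrapper is assumed to expose the same
# tokenize()/convert_list_object() interface (check your installed version).
analyzers = {
    'mecab (ipadic)': JapaneseTokenizer.MecabWrapper(dictType='ipadic'),
    'juman': JapaneseTokenizer.JumanWrapper(),
}

for name, analyzer in analyzers.items():
    tokens = analyzer.tokenize(input_sentence).convert_list_object()
    print(name, tokens)
```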
## Supported Tokenizers
### Mecab
[Mecab](http://mecab.googlecode.com/svn/trunk/mecab/doc/index.html?sess=3f6a4f9896295ef2480fa2482de521f6) is an open-source tokenizer system for various languages (if you have a dictionary for them).
See the [english documentation](https://github.com/jordwest/mecab-docs-en) for details.
### Juman
[Juman](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN) is a tokenizer system developed by the Kurohashi laboratory, Kyoto University, Japan.
Juman is strong with ambiguous writing styles in Japanese, and handles newly coined words well thanks to its huge web-based dictionary.
Juman also tells you the semantic meaning of words.
### Juman++
[Juman++](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN++) is a tokenizer system developed by the Kurohashi laboratory, Kyoto University, Japan.
Juman++ is the successor of Juman. It adopts an RNN model for tokenization.
Juman++ is strong with ambiguous writing styles in Japanese, and handles newly coined words well thanks to its huge web-based dictionary.
Like Juman, it also tells you the semantic meaning of words.
Note: the new Juman++ dev version (later than 2.x) is available on [Github](https://github.com/ku-nlp/jumanpp)
### Kytea
[Kytea](http://www.phontron.com/kytea/) is a tokenizer developed by Graham Neubig.
Kytea uses a different algorithm from that of Mecab or Juman.
# Setting up
## Tokenizers auto-install
```
make install
```
### mecab-neologd dictionary auto-install
```
make install_neologd
```
## Tokenizers manual-install
### MeCab
See [here](https://github.com/jordwest/mecab-docs-en) to install MeCab system.
### Mecab Neologd dictionary
The mecab-neologd dictionary is a dictionary extension based on the ipadic dictionary, which is the basic dictionary of Mecab.
With the mecab-neologd dictionary, you are able to parse newly coined words as single tokens.
Here, newly coined words are things such as movie actor names or company names.
See [here](https://github.com/neologd/mecab-ipadic-neologd) and install mecab-neologd dictionary.
### Juman
```
wget -O juman7.0.1.tar.bz2 "http://nlp.ist.i.kyoto-u.ac.jp/DLcounter/lime.cgi?down=http://nlp.ist.i.kyoto-u.ac.jp/nl-resource/juman/juman-7.01.tar.bz2&name=juman-7.01.tar.bz2"
bzip2 -dc juman7.0.1.tar.bz2 | tar xvf -
cd juman-7.01
./configure
make
[sudo] make install
```
## Juman++
* GCC version must be >= 5
```
wget http://lotus.kuee.kyoto-u.ac.jp/nl-resource/jumanpp/jumanpp-1.02.tar.xz
tar xJvf jumanpp-1.02.tar.xz
cd jumanpp-1.02/
./configure
make
[sudo] make install
```
## Kytea
Install Kytea system
```
wget http://www.phontron.com/kytea/download/kytea-0.4.7.tar.gz
tar -xvf kytea-0.4.7.tar
cd kytea-0.4.7
./configure
make
make install
```
Kytea has a [python wrapper](https://github.com/chezou/Mykytea-python) thanks to Michiaki Ariga.
Install the Kytea python wrapper:
```
pip install kytea
```
## install
```
[sudo] python setup.py install
```
### Note
During installation, you may see a warning message when it fails to install `pyknp` or `kytea`.
If you see these messages, try to re-install those packages manually.
# Usage
Tokenization example (for Python 3.x; for example code for Python 2.x, please see [here](https://github.com/Kensuke-Mitsuzawa/JapaneseTokenizers/blob/master/examples/examples.py))
```
import JapaneseTokenizer
input_sentence = '10日放送の「中居正広のミになる図書館」(テレビ朝日系)で、SMAPの中居正広が、篠原信一の過去の勘違いを明かす一幕があった。'
# ipadic is well-maintained dictionary #
mecab_wrapper = JapaneseTokenizer.MecabWrapper(dictType='ipadic')
print(mecab_wrapper.tokenize(input_sentence).convert_list_object())
# neologd is automatically-generated dictionary from huge web-corpus #
mecab_neologd_wrapper = JapaneseTokenizer.MecabWrapper(dictType='neologd')
print(mecab_neologd_wrapper.tokenize(input_sentence).convert_list_object())
```
## Filtering example
```
import JapaneseTokenizer
# with word filtering by stopword & part-of-speech condition #
print(mecab_wrapper.tokenize(input_sentence).filter(stopwords=['テレビ朝日'], pos_condition=[('名詞', '固有名詞')]).convert_list_object())
```
## Part-of-speech structure
Mecab, Juman, and Kytea each have a different part-of-speech (POS) system.
You can check the part-of-speech (POS) tables [here](http://www.unixuser.org/~euske/doc/postag/)
# Similar Package
## natto-py
natto-py is a sophisticated package for tokenization. It supports the following features:
* easy interface for tokenization
* importing additional dictionary
* partial parsing mode
# LICENSE
MIT license
# For developers
You can build an environment that has the dependencies needed to test this package.
Simply build the docker image and run a docker container.
## Dev environment
The development environment is defined in `test/docker-compose-dev.yml`.
With the docker-compose.yml file, you can call python2.7 or python3.7.
If you're using PyCharm Professional edition, you can set docker-compose.yml as a remote interpreter.
To call python2.7, set `/opt/conda/envs/p27/bin/python2.7`
To call python3.7, set `/opt/conda/envs/p37/bin/python3.7`
## Test environment
These commands check everything from the package installation procedure through to the package tests.
```bash
$ docker-compose build
$ docker-compose up
```
|
[
"Morphology",
"Responsible & Trustworthy NLP",
"Robustness in NLP",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/tokuhirom/akaza
|
2015-10-14T01:17:00Z
|
Yet another Japanese IME for IBus/Linux
|
akaza-im / akaza
Public
Branches
Tags
About
Yet another Japanese IME for
IBus/Linux
# nlp # rust # ime # ibus
Readme
MIT license
Activity
Custom properties
216 stars
7 watching
7 forks
Report repository
Releases 4
v0.2.0
Latest
on Jan 29, 2023
+ 3 releases
Packages
No packages published
Contributors
5
Languages
Rust 95.6%
Perl 1.8%
C 1.1%
Other 1.5%
Code
Issues
18
Pull requests
3
Discussions
Actions
Wiki
Security
Yet another kana-kanji-converter on IBus, written in Rust.
統計的かな漢字変換による日本語IMEです。 Rust で書いていま
す。
現在、開発途中のプロダクトです。非互換の変更が予告なくはい
ります
いじりやすくて ある程度 UIが使いやすいかな漢字変換があったら
面白いなと思ったので作ってみています。 「いじりやすくて」と
いうのはつまり、Hack-able であるという意味です。
モデルデータを自分で生成できて、特定の企業に依存しない自由
なかな漢字変換エンジンを作りたい。
UI/Logic をすべて Rust で書いてあるので、拡張が容易です。
統計的かな漢字変換モデルを採用しています
言語モデルの生成元は日本語 Wikipedia と青空文庫で
す。
形態素解析器 Vibrato で分析した結果をもとに
2gram 言語モデルを構築しています。
利用者の環境で 1 から言語モデルを再生成すること
が可能です。
ユーザー環境で、利用者の変換結果を学習します(unigram,
bigramの頻度を学習します)
ibus-akaza
モチベーション
特徴
README
MIT license
ibus 1.5+
marisa-trie
gtk4
rust
Linux 6.0 以上
ibus 1.5 以上
リトルエンディアン環境
モデルファイルをダウンロードして展開してください。
ibus-akaza をインストールしてください。
Akaza は典型的には以下の順番で探します。
1. ~/.local/share/akaza/keymap/{KEYMAP_NAME}.yml
2. /usr/local/share/akaza/keymap/{KEYMAP_NAME}.yml
3. /usr/share/akaza/keymap/{KEYMAP_NAME}.yml
Dependencies
Runtime dependencies
Build time dependencies
Supported environment
Install 方法
sudo mkdir -p /usr/share/akaza/model/default/
curl -L https://github.com/akaza-im/akaza-
default-
model/releases/download/<<VERSION>>/akaza-
default-model.tar.gz | sudo tar xzv --strip-
components=1 -C /usr/share/akaza/model/default/
rustup install stable
make
sudo make install
ibus restart
ibus engine akaza
設定方法
Keymap の設定
このパスは、XDG ユーザーディレクトリ の仕様に基づいていま
す。 Akaza は Keymap は XDG_DATA_HOME と XDG_DATA_DIRS か
らさがします。 XDG_DATA_HOME は設定していなければ
~/.local/share/ です。XDGA_DATA_DIR は設定していなければ
/usr/local/share:/usr/share/ です。
ローマ字かなマップも同様のパスからさがします。
1. ~/.local/share/akaza/romkan/{KEYMAP_NAME}.yml
2. /usr/local/share/akaza/romkan/{KEYMAP_NAME}.yml
3. /usr/share/akaza/romkan/{KEYMAP_NAME}.yml
model は複数のファイルからなります。
unigram.model
bigram.model
SKK-JISYO.akaza
この切り替えは以下のようなところから読まれます。
~/.local/share/akaza/model/{MODEL_NAME}/unigram.model
~/.local/share/akaza/model/{MODEL_NAME}/bigram.model
~/.local/share/akaza/model/{MODEL_NAME}/SKK-
JISYO.akaza
keymap, romkan と同様に、XDG_DATA_DIRS から読むこともでき
ます。
流行り言葉が入力できない場合、jawiki-kana-kanji-dict の利用を検
討してください。 Wikipedia から自動的に抽出されたデータを元
に SKK 辞書を作成しています。 Github Actions で自動的に実行さ
れているため、常に新鮮です。
一方で、自動抽出しているために変なワードも入っています。変
なワードが登録されていることに気づいたら、github issues で報
告してください。
RomKan の設定
model の設定
FAQ
最近の言葉が変換できません/固有名詞が変換できま
せん
人名が入力できません。など。
必要な SKK の辞書を読み込んでください。 現時点では config.yml
を手で編集する必要があります。
https://skk-dev.github.io/dict/
ibus-uniemoji を参考に初期の実装を行いました。
日本語入力を支える技術 を読み込んで実装しました。この本
が
実装
THANKS TO
|
# ibus-akaza
Yet another kana-kanji-converter on IBus, written in Rust.
統計的かな漢字変換による日本語IMEです。
Rust で書いています。
**現在、開発途中のプロダクトです。非互換の変更が予告なくはいります**
## モチベーション
いじりやすくて **ある程度** UIが使いやすいかな漢字変換があったら面白いなと思ったので作ってみています。
「いじりやすくて」というのはつまり、Hack-able であるという意味です。
モデルデータを自分で生成できて、特定の企業に依存しない自由なかな漢字変換エンジンを作りたい。
## 特徴
* UI/Logic をすべて Rust で書いてあるので、拡張が容易です。
* 統計的かな漢字変換モデルを採用しています
* 言語モデルの生成元は日本語 Wikipedia と青空文庫です。
* 形態素解析器 Vibrato で分析した結果をもとに 2gram 言語モデルを構築しています。
* 利用者の環境で 1 から言語モデルを再生成することが可能です。
* ユーザー環境で、利用者の変換結果を学習します(unigram, bigramの頻度を学習します)
## Dependencies
### Runtime dependencies
* ibus 1.5+
* marisa-trie
* gtk4
### Build time dependencies
* rust
### Supported environment
* Linux 6.0 以上
* ibus 1.5 以上
* リトルエンディアン環境
## Install 方法
モデルファイルをダウンロードして展開してください。
sudo mkdir -p /usr/share/akaza/model/default/
curl -L https://github.com/akaza-im/akaza-default-model/releases/download/<<VERSION>>/akaza-default-model.tar.gz | sudo tar xzv --strip-components=1 -C /usr/share/akaza/model/default/
ibus-akaza をインストールしてください。
rustup install stable
make
sudo make install
ibus restart
ibus engine akaza
## 設定方法
### Keymap の設定
Akaza は典型的には以下の順番で探します。
1. `~/.local/share/akaza/keymap/{KEYMAP_NAME}.yml`
2. `/usr/local/share/akaza/keymap/{KEYMAP_NAME}.yml`
3. `/usr/share/akaza/keymap/{KEYMAP_NAME}.yml`
このパスは、[XDG ユーザーディレクトリ](https://wiki.archlinux.jp/index.php/XDG_%E3%83%A6%E3%83%BC%E3%82%B6%E3%83%BC%E3%83%87%E3%82%A3%E3%83%AC%E3%82%AF%E3%83%88%E3%83%AA)
の仕様に基づいています。
Akaza は Keymap は `XDG_DATA_HOME` と `XDG_DATA_DIRS` からさがします。
`XDG_DATA_HOME` は設定していなければ `~/.local/share/` です。`XDGA_DATA_DIR` は設定していなければ `/usr/local/share:/usr/share/` です。
### RomKan の設定
ローマ字かなマップも同様のパスからさがします。
1. `~/.local/share/akaza/romkan/{KEYMAP_NAME}.yml`
2. `/usr/local/share/akaza/romkan/{KEYMAP_NAME}.yml`
3. `/usr/share/akaza/romkan/{KEYMAP_NAME}.yml`
### model の設定
model は複数のファイルからなります。
- unigram.model
- bigram.model
- SKK-JISYO.akaza
この切り替えは以下のようなところから読まれます。
- `~/.local/share/akaza/model/{MODEL_NAME}/unigram.model`
- `~/.local/share/akaza/model/{MODEL_NAME}/bigram.model`
- `~/.local/share/akaza/model/{MODEL_NAME}/SKK-JISYO.akaza`
keymap, romkan と同様に、`XDG_DATA_DIRS` から読むこともできます。
## FAQ
### 最近の言葉が変換できません/固有名詞が変換できません
流行り言葉が入力できない場合、[jawiki-kana-kanji-dict](https://github.com/tokuhirom/jawiki-kana-kanji-dict) の利用を検討してください。
Wikipedia から自動的に抽出されたデータを元に SKK 辞書を作成しています。
Github Actions で自動的に実行されているため、常に新鮮です。
一方で、自動抽出しているために変なワードも入っています。変なワードが登録されていることに気づいたら、github issues で報告してください。
### 人名が入力できません。など。
必要な SKK の辞書を読み込んでください。
現時点では config.yml を手で編集する必要があります。
https://skk-dev.github.io/dict/
## THANKS TO
* [ibus-uniemoji](https://github.com/salty-horse/ibus-uniemoji) を参考に初期の実装を行いました。
* [日本語入力を支える技術](https://gihyo.jp/book/2012/978-4-7741-4993-6) を読み込んで実装しました。この本がなかったら実装しようと思わなかったと思います。
|
[
"Language Models",
"Syntactic Text Processing"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/hexenq/kuroshiro
|
2016-01-03T09:16:40Z
|
Japanese language library for converting Japanese sentence to Hiragana, Katakana or Romaji with furigana and okurigana modes supported.
|
hexenq / kuroshiro
Public
7 Branches
18 Tags
kuroshiro is a Japanese language library for converting Japanese sentence to Hiragana, Katakana or Romaji with furigana and okurigana
modes supported.
Read this in other languages: English, 日本語, 简体中文, 繁體中文, Esperanto.
You can check the demo here.
Japanese Sentence => Hiragana, Katakana or Romaji
Furigana and okurigana supported
About
Japanese language library for
converting Japanese sentence to
Hiragana, Katakana or Romaji with
furigana and okurigana modes
supported.
kuroshiro.org
# katakana # hiragana # japanese # romaji
# kana # kanji # hepburn # mecab # furigana
# kuromoji # okurigana
Readme
MIT license
Activity
836 stars
16 watching
94 forks
Report repository
Releases 2
1.2.0
Latest
on Jun 8, 2021
+ 1 release
Used by 486
+ 478
Contributors
8
Languages
JavaScript 100.0%
Code
Issues
42
Pull requests
6
Actions
Projects
Security
Insights
kuroshiro
Demo
Feature
README
MIT license
🆕Multiple morphological analyzers supported
🆕Multiple romanization systems supported
Useful Japanese utils
Separate morphological analyzer from phonetic notation logic to make it possible that we can use different morphological analyzers
(ready-made or customized)
Embrace ES8/ES2017 to use async/await functions
Use ES6 Module instead of CommonJS
You should check the environment compatibility of each analyzer before you start working with them
Analyzer
Node.js Support
Browser Support
Plugin Repo
Developer
Kuromoji
✓
✓
kuroshiro-analyzer-kuromoji
Hexen Qi
Mecab
✓
✗
kuroshiro-analyzer-mecab
Hexen Qi
Yahoo Web API
✓
✗
kuroshiro-analyzer-yahoo-webapi
Hexen Qi
Install with npm package manager:
Load the library:
Support ES6 Module import
And CommonJS require
Add dist/kuroshiro.min.js to your frontend project (you may first build it from source with npm run build after npm install ), and in
your HTML:
Breaking Change in 1.x
Ready-made Analyzer Plugins
Usage
Node.js (or using a module bundler (e.g. Webpack))
$ npm install kuroshiro
import Kuroshiro from "kuroshiro";
// Initialize kuroshiro with an instance of analyzer (You could check the [apidoc](#initanalyzer) for more informatio
// For this example, you should npm install and import the kuromoji analyzer first
import KuromojiAnalyzer from "kuroshiro-analyzer-kuromoji";
// Instantiate
const kuroshiro = new Kuroshiro();
// Initialize
// Here uses async/await, you could also use Promise
await kuroshiro.init(new KuromojiAnalyzer());
// Convert what you want
const result = await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" }
const Kuroshiro = require("kuroshiro");
const KuromojiAnalyzer = require("kuroshiro-analyzer-kuromoji");
const kuroshiro = new Kuroshiro();
kuroshiro.init(new KuromojiAnalyzer())
.then(function(){
return kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
})
.then(function(result){
console.log(result);
})
Browser
For this example, you should also include kuroshiro-analyzer-kuromoji.min.js which you could get from kuroshiro-analyzer-kuromoji
Instantiate:
Initialize kuroshiro with an instance of analyzer, then convert what you want:
Examples
Initialize kuroshiro with an instance of analyzer. You should first import an analyzer and initialize it. You can make use of the Ready-made
Analyzers listed above. And please refer to documentation of analyzers for analyzer initialization instructions
Arguments
analyzer - An instance of analyzer.
Examples
Convert given string to target syllabary with options available
Arguments
str - A String to be converted.
options - Optional kuroshiro has several convert options as below.
Options
Type
Default
Description
to
String
"hiragana"
Target syllabary [ hiragana , katakana , romaji ]
mode
String
"normal"
Convert mode [ normal , spaced , okurigana , furigana ]
romajiSystem
String
"hepburn"
Romanization system [ nippon , passport , hepburn ]
delimiter_start
String
"("
Delimiter(Start)
delimiter_end
String
")"
Delimiter(End)
*: Param romajiSystem is only applied when the value of param to is romaji . For more about it, check Romanization System
<script src="url/to/kuroshiro.min.js"></script>
<script src="url/to/kuroshiro-analyzer-kuromoji.min.js"></script>
var kuroshiro = new Kuroshiro();
kuroshiro.init(new KuromojiAnalyzer({ dictPath: "url/to/dictFiles" }))
.then(function () {
return kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
})
.then(function(result){
console.log(result);
})
API
Constructor
const kuroshiro = new Kuroshiro();
Instance Methods
init(analyzer)
await kuroshiro.init(new KuromojiAnalyzer());
convert(str, [options])
*
Examples
// furigana
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"furigana", to:"hiragana"});
// result: 感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!
かん
と
て
つな
かさ
じんせい
さいこう
Examples
Check if input char is hiragana.
Check if input char is katakana.
Check if input char is kana.
Check if input char is kanji.
Check if input char is Japanese.
Check if input string has hiragana.
Check if input string has katakana.
Check if input string has kana.
Check if input string has kanji.
Check if input string has Japanese.
Convert input kana string to hiragana.
// normal
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"}
// result:かんじとれたらてをつなごう、かさなるのはじんせいのライン and レミリアさいこう!
// spaced
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"}
// result:かんじとれ たら て を つなご う 、 かさなる の は じんせい の ライン and レミ リア さいこう !
// okurigana
await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"}
// result: 感(かん)じ取(と)れたら手(て)を繋(つな)ごう、重(かさ)なるのは人生(じんせい)のライン and レミリア最高(さいこう)!
Utils
const result = Kuroshiro.Util.isHiragana("あ"));
isHiragana(char)
isKatakana(char)
isKana(char)
isKanji(char)
isJapanese(char)
hasHiragana(str)
hasKatakana(str)
hasKana(str)
hasKanji(str)
hasJapanese(str)
kanaToHiragna(str)
Convert input kana string to katakana.
Convert input kana string to romaji. Param system accepts "nippon" , "passport" , "hepburn" (Default: "hepburn").
kuroshiro supports three kinds of romanization systems.
nippon : Nippon-shiki romanization. Refer to ISO 3602 Strict.
passport : Passport-shiki romanization. Refer to Japanese romanization table published by Ministry of Foreign Affairs of Japan.
hepburn : Hepburn romanization. Refer to BS 4812 : 1972.
There is a useful webpage for you to check the difference between these romanization systems.
It is impossible to fully automatically convert furigana directly to romaji, because furigana lacks information on pronunciation (refer to なぜフリガナではダメなのか?).
kuroshiro will not handle chōon when processing directly furigana (kana) -> romaji conversion with every romanization system (Except that
Chōonpu will be handled)
For example, you'll get "kousi", "koushi", "koushi" respectively when converts kana "こうし" to romaji using nippon , passport , hepburn
romanization system.
The kanji -> romaji conversion with/without furigana mode is unaffected by this logic.
Please check CONTRIBUTING.
kuromoji
wanakana
MIT
kanaToKatakana(str)
kanaToRomaji(str, system)
Romanization System
Notice for Romaji Conversion
Contributing
Inspired By
License
|

# kuroshiro
[](https://travis-ci.org/hexenq/kuroshiro)
[](https://coveralls.io/r/hexenq/kuroshiro)
[](http://badge.fury.io/js/kuroshiro)
[](https://gitter.im/hexenq/kuroshiro)
[](LICENSE)
kuroshiroは日本語文をローマ字や仮名なとに変換できるライブラリです。フリガナ・送り仮名の機能も搭載します。
*ほかの言語:[English](README.md), [日本語](README.jp.md), [简体中文](README.zh-cn.md), [繁體中文](README.zh-tw.md), [Esperanto](README.eo-eo.md)。*
## デモ
オンラインデモは[こちら](https://kuroshiro.org/#demo)です。
## 特徴
- 日本語文 => ひらがな、カタカナ、ローマ字
- フリガナ、送り仮名サポート
- 🆕複数の形態素解析器をサポート
- 🆕複数のローマ字表記法をサポート
- 実用ツール付き
## バッジョン1.xでの重大な変更
- 形態素解析器がルビロジックから分離される。それゆえ、様々な形態素解析器([レディーメイド](#形態素解析器プラグイン)も[カスタマイズ](CONTRIBUTING.md#how-to-submit-new-analyzer-plugins)も)を利用できることになります。
- ES2017の新機能「async/await」を利用します
- CommonJSからES Modulesへ移行します
## 形態素解析器プラグイン
*始まる前にプラグインの適合性をチェックしてください*
| 解析器 | Node.js サポート | ブラウザ サポート | レポジトリ | 開発者 |
|---|---|---|---|---|
|Kuromoji|✓|✓|[kuroshiro-analyzer-kuromoji](https://github.com/hexenq/kuroshiro-analyzer-kuromoji)|[Hexen Qi](https://github.com/hexenq)|
|Mecab|✓|✗|[kuroshiro-analyzer-mecab](https://github.com/hexenq/kuroshiro-analyzer-mecab)|[Hexen Qi](https://github.com/hexenq)|
|Yahoo Web API|✓|✗|[kuroshiro-analyzer-yahoo-webapi](https://github.com/hexenq/kuroshiro-analyzer-yahoo-webapi)|[Hexen Qi](https://github.com/hexenq)|
## 使い方
### Node.js (又はWebpackなどのモジュールバンドラを使ってる時)
npmでインストール:
```sh
$ npm install kuroshiro
```
kuroshiroをロードします:
*ES6 Module `import` と CommonJS `require`、どちらでもOK*
```js
import Kuroshiro from "kuroshiro";
```
インスタンス化します:
```js
const kuroshiro = new Kuroshiro();
```
形態素解析器のインスタンスを引数にしてkuroshiroを初期化する ([API説明](#initanalyzer)を参考にしてください):
```js
// この例では,まずnpm installとimportを通じてkuromojiの形態素解析器を導入します
import KuromojiAnalyzer from "kuroshiro-analyzer-kuromoji";
// ...
// 初期化
// ここでasync/awaitを使ってますが, Promiseも使えます
await kuroshiro.init(new KuromojiAnalyzer());
```
変換の実行:
```js
const result = await kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
```
### ブラウザ
`dist/kuroshiro.min.js`を導入し (その前に`npm install`と`npm run build`を通じて`kuroshiro.min.js`を生成します)、そしてHTMLに:
```html
<script src="url/to/kuroshiro.min.js"></script>
```
この例では`kuroshiro-analyzer-kuromoji.min.js`の導入は必要です。詳しくは[kuroshiro-analyzer-kuromoji](https://github.com/hexenq/kuroshiro-analyzer-kuromoji)を参考にしてください
```html
<script src="url/to/kuroshiro-analyzer-kuromoji.min.js"></script>
```
インスタンス化します:
```js
var kuroshiro = new Kuroshiro();
```
形態素解析器のインスタンスを引数にしてkuroshiroを初期化するから,変換を実行します:
```js
kuroshiro.init(new KuromojiAnalyzer({ dictPath: "url/to/dictFiles" }))
.then(function () {
return kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", { to: "hiragana" });
})
.then(function(result){
console.log(result);
})
```
## APIの説明
### コンストラクタ
__例__
```js
const kuroshiro = new Kuroshiro();
```
### インスタンス関数
#### init(analyzer)
形態素解析器のインスタンスを引数にしてkuroshiroを初期化する。先に形態素解析器の導入と初期化は必要です。前述の[形態素解析器プラグイン](#形態素解析器プラグイン)を利用できます。形態素解析器の初期化方法は各自のドキュメントを参照してください。
__引数__
* `analyzer` - 形態素解析器のインスタンス。
__例__
```js
await kuroshiro.init(new KuromojiAnalyzer());
```
#### convert(str, [options])
文字列を目標音節文字に変換します(変換モードが設置できます)。
__引数__
* `str` - 変換される文字列。
* `options` - *任意* 変換のパラメータ。下表の通り。
| オプション | タイプ | デフォルト値 | 説明 |
|---|---|---|---|
| to | String | 'hiragana' | 目標音節文字<br />`hiragana` (ひらがな),<br />`katakana` (カタカナ),<br />`romaji` (ローマ字) |
| mode | String | 'normal' | 変換モード<br />`normal` (一般),<br />`spaced` (スペースで組み分け),<br />`okurigana` (送り仮名),<br />`furigana` (フリガナ) |
| romajiSystem<sup>*</sup> | String | "hepburn" | ローマ字<br />`nippon` (日本式),<br />`passport` (パスポート式),<br />`hepburn` (ヘボン式) |
| delimiter_start | String | '(' | 区切り文字 (始め) |
| delimiter_end | String | ')' | 区切り文字 (終り) |
**: 引数`romajiSystem`は引数`to`が`romaji`に設定されてる場合にのみ有効です。詳細については, [ローマ字表記法](#ローマ字表記法)を参考にしてください。*
__例__
```js
// normal (一般)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"});
// 結果:かんじとれたらてをつなごう、かさなるのはじんせいのライン and レミリアさいこう!
```
```js
// spaced (スペースで組み分け)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"});
// 結果:かんじとれ たら て を つなご う 、 かさなる の は じんせい の ライン and レミ リア さいこう !
```
```js
// okurigana (送り仮名)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"okurigana", to:"hiragana"});
// 結果: 感(かん)じ取(と)れたら手(て)を繋(つな)ごう、重(かさ)なるのは人生(じんせい)のライン and レミリア最高(さいこう)!
```
<pre>
// furigana (フリガナ)
kuroshiro.convert("感じ取れたら手を繋ごう、重なるのは人生のライン and レミリア最高!", {mode:"furigana", to:"hiragana"});
// 結果: <ruby>感<rp>(</rp><rt>かん</rt><rp>)</rp></ruby>じ<ruby>取<rp>(</rp><rt>と</rt><rp>)</rp></ruby>れたら<ruby>手<rp>(</rp><rt>て</rt><rp>)</rp></ruby>を<ruby>繋<rp>(</rp><rt>つな</rt><rp>)</rp></ruby>ごう、<ruby>重<rp>(</rp><rt>かさ</rt><rp>)</rp></ruby>なるのは<ruby>人生<rp>(</rp><rt>じんせい</rt><rp>)</rp></ruby>のライン and レミリア<ruby>最高<rp>(</rp><rt>さいこう</rt><rp>)</rp></ruby>!
</pre>
### 実用ツール
__例__
```js
const result = Kuroshiro.Util.isHiragana("あ"));
```
#### isHiragana(char)
入力文字はひらがなかどうかを判断します。
#### isKatakana(char)
入力文字はカタカナかどうかを判断します。
#### isKana(char)
入力文字は仮名かどうかを判断します。
#### isKanji(char)
入力文字は漢字かどうかを判断します。
#### isJapanese(char)
入力文字は日本語かどうかを判断します。
#### hasHiragana(str)
入力文字列にひらがながあるかどうかを確認する。
#### hasKatakana(str)
入力文字列にカタカナがあるかどうかを確認する。
#### hasKana(str)
入力文字列に仮名があるかどうかを確認する。
#### hasKanji(str)
入力文字列に漢字があるかどうかを確認する。
#### hasJapanese(str)
入力文字列に日本語があるかどうかを確認する。
#### kanaToHiragna(str)
入力仮名文字列をひらがなへ変換します。
#### kanaToKatakana(str)
入力仮名文字列をカタカナへ変換します。
#### kanaToRomaji(str, system)
入力仮名文字列をローマ字へ変換します。引数`system`の指定可能値は`"nippon"`, `"passport"`, `"hepburn"` (デフォルト値: "hepburn")
## ローマ字表記法
kuroshiroは三種類のローマ字表記法をサポートします。
`nippon`: 日本式ローマ字。[ISO 3602 Strict](http://www.age.ne.jp/x/nrs/iso3602/iso3602.html) を参照。
`passport`: パスポート式ローマ字。 日本外務省が発表した [ヘボン式ローマ字綴方表](https://www.ezairyu.mofa.go.jp/passport/hebon.html) を参照。
`hepburn`: ヘボン式ローマ字。[BS 4812 : 1972](https://archive.is/PiJ4) を参照。
各種ローマ字表の比較は[こちら](http://jgrammar.life.coocan.jp/ja/data/rohmaji2.htm)を参考にしてください。
### ローマ字変換のお知らせ
フリガナは音声を正確にあらわしていないため、__フリガナ__ を __ローマ字__ に完全自動的に変換することは不可能です。([なぜフリガナではダメなのか?](https://green.adam.ne.jp/roomazi/onamae.html#naze)を参照)
そのゆえ、任意のローマ字表記法を使って、フリガナ(仮名)-> ローマ字 変換を行うとき、kuroshiroは長音の処理を実行しません。(長音符は処理されます)
*例えば`nippon`、` passport`、 `hepburn`のローマ字表記法を使って フリガナ->ローマ字 変換を行うと、それぞれ"kousi"、 "koushi"、 "koushi"が得られます。*
フリガナモードを使うかどうかにかかわらず、漢字->ローマ字の変換はこの仕組みに影響を与えられないです。
## 貢献したい方
[CONTRIBUTING](CONTRIBUTING.md) を参考にしてみてください。
## 感謝
- kuromoji
- wanakana
## ライセンス
MIT
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/scriptin/kanji-frequency
|
2016-01-24T01:51:10Z
|
Kanji usage frequency data collected from various sources
|
scriptin / kanji-frequency
Public
Branches
Tags
About
Kanji usage frequency data
collected from various sources
scriptin.github.io/kanji-frequency/
# data # japanese # corpus
# data-visualization # cjk # kanji
# japanese-language # corpus-linguistics
# frequency-lists # cjk-characters
# kanji-frequency
Readme
CC-BY-4.0 license
Activity
131 stars
5 watching
19 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
Languages
Astro 46.6%
JavaScript 40.9%
MDX 9.9%
TypeScript 2.6%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
README
CC-BY-4.0 license
Datasets built from various Japanese language corpora
https://scriptin.github.io/kanji-frequency/ - see this
website for the dataset description. This readme
describes only technical aspects.
You can download the datasets here:
https://github.com/scriptin/kanji-
frequency/tree/master/data
You'll need Node.js 18 or later.
See scripts section in package.json.
Aozora:
aozora:download - use crawler/scraper to collect
the data
aozora:gaiji:extract - extract gaiji notations
data from scraped pages. Gaiji refers to kanji
charasters which are replaced with images in the
documents, because Shift-JIS encoding cannot
represent them
aozora:gaiji:replacements - build gaiji
replacements file - produces only partial results,
which may need to be manually completed
aozora:clean - clean the scraped pages (apply
gaiji replacements)
aozora:count - create the dataset
Wikipedia:
wikipedia:fetch - fetch random pages using
MediaWiki API
wikipedia:count - create the dataset
News:
news:wikinews:fetch - fetch random pages from
Wikinews using MediaWiki API
news:count - create the dataset
news:dates - create additional file with dates of
articles
Kanji usage frequency
Building the datasets
See Astro docs and the scripts section in
package.json.
Building the website
|
# Kanji usage frequency
Datasets built from various Japanese language corpora
<https://scriptin.github.io/kanji-frequency/> - see this website for the dataset description. This readme describes only technical aspects.
You can download the datasets here: <https://github.com/scriptin/kanji-frequency/tree/master/data>
## Building the datasets
You'll need Node.js 18 or later.
See `scripts` section in [package.json](./package.json).
Aozora:
- `aozora:download` - use crawler/scraper to collect the data
- `aozora:gaiji:extract` - extract gaiji notation data from the scraped pages. Gaiji refers to kanji characters which are replaced with images in the documents because the Shift-JIS encoding cannot represent them
- `aozora:gaiji:replacements` - build the gaiji replacements file; produces only partial results, which may need to be completed manually
- `aozora:clean` - clean the scraped pages (apply gaiji replacements)
- `aozora:count` - create the dataset (the counting step is sketched after these script lists)
Wikipedia:
- `wikipedia:fetch` - fetch random pages using MediaWiki API
- `wikipedia:count` - create the dataset
News:
- `news:wikinews:fetch` - fetch random pages from Wikinews using MediaWiki API
- `news:count` - create the dataset
- `news:dates` - create additional file with dates of articles
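Conceptually, the `*:count` steps boil down to tallying kanji code points in the cleaned text. A minimal, illustrative Python sketch of that idea (not the project's actual JavaScript scripts, and the kanji check is simplified to the basic CJK Unified Ideographs block):

```python
from collections import Counter

def is_kanji(ch):
    # Basic CJK Unified Ideographs block only; the real scripts may cover more ranges.
    return "\u4e00" <= ch <= "\u9fff"

def count_kanji(texts):
    counts = Counter(ch for text in texts for ch in text if is_kanji(ch))
    total = sum(counts.values())
    # (kanji, count, relative frequency), most frequent first
    return [(k, c, c / total) for k, c in counts.most_common()]

print(count_kanji(["日本語の文章", "日本の漢字"])[:3])
```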
## Building the website
See [Astro](https://astro.build/) [docs](https://docs.astro.build/en/getting-started/) and the `scripts` section in [package.json](./package.json).
|
[
"Information Extraction & Text Mining",
"Structured Data in NLP",
"Term Extraction"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/scriptin/jmdict-simplified
|
2016-02-07T16:34:32Z
|
JMdict and JMnedict in JSON format
|
scriptin / jmdict-simplified
Public
Branches
Tags
About
JMdict, JMnedict, Kanjidic,
KRADFILE/RADKFILE in JSON
format
# language # json # xml # dictionary
# japanese # japanese-language
# dictionary-tools # jmdict # kanjidic2
# jmnedict # radkfile # kradfile # kanjidic
Readme
CC-BY-SA-4.0 license
Activity
193 stars
8 watching
13 forks
Report repository
Releases 121
3.6.1+20241104122244
Latest
3 days ago
+ 120 releases
Contributors
4
Languages
Kotlin 77.7%
TypeScript 18.1%
JavaScript 4.2%
Code
Issues
Pull requests
Actions
Security
Insights
jmdict-simplified
README
CC-BY-SA-4.0 license
JMdict, JMnedict, Kanjidic, and Kradfile/Radkfile in
JSON format
with more comprehensible structure and beginner-
friendly documentation
Download: JSON files
Read: format docs
NPM: @scriptin/jmdict-simplified-types
NPM: @scriptin/jmdict-simplified-loader
Releases are automatically scheduled for every
Monday. See release.yml
Found a bug? Need a new feature? See
CONTRIBUTING.md
Original XML files are less than ideal in terms of format.
(My opinion only, the JMdict/JMnedict project in general
is absolutely awesome!) This project provides the
following changes and improvements:
1. JSON instead of XML (or custom text format of
RADKFILE/KRADFILE). Because the original
format used some "advanced" XML features, such
as entities and DOCTYPE, it could be quite difficult
to use in some tech stacks, e.g. when your
programming language of choice has no libraries
for parsing some syntax
2. Regular structure for every item in every collection,
no "same as in previous" implicit values. This is a
problem with original XML files because users' code
has to keep track of various parts of state while
traversing collections. In this project, I tried to make
every item of every collection "self-contained," with
all the fields having all the values, without a need to
refer to preceding items
3. Avoiding null (with few exceptions) and missing
fields, preferring empty arrays. See
http://thecodelesscode.com/case/6 for the
inspiration for this
4. Human-readable names for fields instead of cryptic
abbreviations with no explanations
Why?
5. Documentation in a single file instead of browsing
obscure pages across multiple sites. In my opinion,
the documentation is the weakest part of
JMDict/JMnedict project
See the Format documentation or TypeScript types
Please also read the original documentation if you have
more questions:
EDRDG wiki
JMdict (also wiki)
JMnedict
Kanjidic
RADKFILE/KRADFILE
There are also Kotlin types, although they contain some
methods and annotations you might not need.
JMdictJsonElement.kt
JMnedictJsonElement.kt
Kanjidic2JsonElement.kt
There are three main types of JSON files for the JMdict
dictionary:
full - same as original files, with no omissions of
entries
"common-only" - containing only dictionary entries
considered "common" - if any of /k_ele/ke_pri or
/r_ele/re_pri elements in XML files contain one
of these markers: "news1", "ichi1", "spec1",
"spec2", "gai1". Only one such element is enough
for the whole word to be considered common. This
corresponds to how online dictionaries such as
https://jisho.org classify words as "common".
Common-only distributions are much smaller. They
are marked with "common" keyword in file names,
see the latest release
Format
Full, "common-only", with
examples, and language-specific
versions
with example sentences (built from
JMdict_e_examp.xml source file) - English-only
version with example sentences from Tanaka
corpus maintained by https://tatoeba.org. This
version doesn't have a full support in this project:
NPM libraries do not provide parsers and type
definitions
Also, JMdict and Kanjidic have language-specific
versions with language codes (3-letter ISO 639-2 codes
for JMdict, 2-letter ISO 639-1 codes for Kanjidic) in file
names:
all - all languages, i.e. no language filter was
applied
eng / en - English
ger / de - German
rus / ru - Russian
hun / hu - Hungarian
dut / nl - Dutch
spa / es - Spanish
fre / fr - French
swe / sv - Swedish
slv / sl - Slovenian
JMnedict and JMdict with examples have only one
respective version each, since they are both English-
only, and JMnedict has no "common" indicators on
entries.
Java 17 (JRE only, JDK is not necessary) - you can
use Azul Zulu OpenJDK
You don't need to install Gradle, just use the Gradle
wrapper provided in this repository: gradlew (for
Linux/Mac) or gradlew.bat (for Windows)
NOTE: You can grab the pre-built JSON files in the
latest release
Requirements for running the
conversion script
Converting XML dictionaries
Use included scripts: gradlew (for Linux/macOS) or
gradlew.bat (for Windows).
Tasks to convert dictionary files and create distribution
archives:
./gradlew clean - clean all build artifacts to start
a fresh build, in cases when you need to re-
download and convert from scratch
./gradlew download - download and extract
original dictionary XML files into build/dict-xml
./gradlew convert - convert all dictionaries to
JSON and place into build/dict-json
./gradlew archive - create distribution archives
(zip, tar+gzip) in build/distributions
Utility tasks (for CI/CD workflows):
./gradlew --quiet jmdictHasChanged ,
./gradlew --quiet jmnedictHasChanged , and
./gradlew --quiet kanjidicHasChanged - check if
dictionary files have changed by comparing
checksums of downloaded files with those stored in
the checksums. Outputs YES or NO . Run this only
after download task! The --quiet is to silence
Gradle logs, e.g. when you need to put values into
environments variables.
./gradlew updateChecksums - update checksum
files in the checksums directory. Run after creating
distribution archives and commit checksum files into
the repository, so that next time CI/CD workflow
knows if it needs to rebuild anything.
./gradlew uberJar - create an Uber JAR for
standalone use (i.e. w/o Gradle). The JAR program
shows help messages and should be intuitive to
use if you know how to run it.
For the full list of available tasks, run ./gradlew tasks
Make sure to run tasks in order: download ->
convert -> archive
If running Gradle fails, make sure java is available
on your $PATH environment variable
Troubleshooting
Run Gradle with --stacktrace , --info , or --
debug arguments to see more details if you get an
error
The original XML files - JMdict.xml, JMdict_e.xml,
JMdict_e_examp.xml,and JMnedict.xml - are the
property of the Electronic Dictionary Research and
Development Group, and are used in conformance with
the Group's license. Project started in 1991 by Jim
Breen.
All derived files are distributed under the same license,
as the original license requires it.
The original kanjidic2.xml file is released under
Creative Commons Attribution-ShareAlike License v4.0.
See the Copyright and Permissions section on the
Kanjidic wiki for details.
All derived files are distributed under the same license,
as the original license requires it.
The RADKFILE and KRADFILE files are copyright and
available under the EDRDG Licence. The copyright of
the RADKFILE2 and KRADFILE2 files is held by Jim
Rose.
NPM packages @scriptin/jmdict-simplified-types
and @scriptin/jmdict-simplified-loader are
available under MIT license.
License
JMdict and JMnedict
Kanjidic
RADKFILE/KRADFILE
NPM packages
h
fil
|
# jmdict-simplified
**[JMdict][], [JMnedict][], [Kanjidic][], and [Kradfile/Radkfile][Kradfile] in JSON format**<br>
with more comprehensible structure and beginner-friendly documentation
[][latest-release]
[][format]
[][npm-types]<br>
[][npm-loader]
---
- Releases are automatically scheduled for every Monday. See [release.yml](.github/workflows/release.yml)
- Found a bug? Need a new feature? See [CONTRIBUTING.md](CONTRIBUTING.md)
## Why?
Original XML files are less than ideal in terms of format.
(My opinion only, the JMdict/JMnedict project in general is absolutely awesome!)
This project provides the following changes and improvements:
1. JSON instead of XML (or custom text format of RADKFILE/KRADFILE).
Because the original format used some "advanced" XML features,
such as entities and DOCTYPE, it could be quite difficult to use in some tech stacks,
e.g. when your programming language of choice has no libraries for parsing some syntax
2. Regular structure for every item in every collection, no "same as in previous" implicit values.
This is a problem with original XML files because users' code has to keep track
of various parts of state while traversing collections. In this project, I tried to make every
item of every collection "self-contained," with all the fields having all the values,
without a need to refer to preceding items
3. Avoiding `null` (with few exceptions) and missing fields, preferring empty arrays
(a short illustration follows this list).
See <http://thecodelesscode.com/case/6> for the inspiration for this
4. Human-readable names for fields instead of cryptic abbreviations with no explanations
5. Documentation in a single file instead of browsing obscure pages across multiple sites.
In my opinion, the documentation is the weakest part of the JMdict/JMnedict project
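
To make points 2 and 3 above concrete, here is a minimal, hypothetical sketch (TypeScript) of a "self-contained" entry shape. The field names are illustrative only and are not the actual schema; see the Format documentation and TypeScript types below for the real one.

```typescript
// Illustrative only: NOT the real jmdict-simplified schema (see the Format docs
// and TypeScript types for the actual field names). The point is the shape:
// every field is always present, and "nothing" is an empty array, never null.
interface IllustrativeEntry {
  id: string;
  kanji: string[];   // [] when an entry has no kanji writings
  kana: string[];    // never null, never missing
  glosses: string[];
}

// Consumers can traverse a collection without tracking "same as previous"
// state or guarding against null:
function allGlosses(entries: IllustrativeEntry[]): string[] {
  return entries.flatMap((entry) => entry.glosses);
}
```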
## Format
> See the [Format documentation][format] or [TypeScript types](node/packages/jmdict-simplified-types/index.ts)
Please also read the original documentation if you have more questions:
- [EDRDG wiki](https://www.edrdg.org/wiki/index.php/Main_Page)
- [JMdict][] (also [wiki](https://www.edrdg.org/wiki/index.php/JMdict-EDICT_Dictionary_Project))
- [JMnedict][]
- [Kanjidic][]
- [RADKFILE/KRADFILE][Kradfile]
There are also Kotlin types, although they contain some methods and annotations you might not need.
- [JMdictJsonElement.kt](src/main/kotlin/org/edrdg/jmdict/simplified/conversion/jmdict/JMdictJsonElement.kt)
- [JMnedictJsonElement.kt](src/main/kotlin/org/edrdg/jmdict/simplified/conversion/jmnedict/JMnedictJsonElement.kt)
- [Kanjidic2JsonElement.kt](src/main/kotlin/org/edrdg/jmdict/simplified/conversion/kanjidic/Kanjidic2JsonElement.kt)
## Full, "common-only", with examples, and language-specific versions
There are three main types of JSON files for the JMdict dictionary:
- full - same as original files, with no omissions of entries
- "common-only" - containing only dictionary entries considered "common" -
if any of `/k_ele/ke_pri` or `/r_ele/re_pri` elements in XML files contain
one of these markers: "news1", "ichi1", "spec1", "spec2", "gai1".
Only one such element is enough for the whole word to be considered common.
This corresponds to how online dictionaries such as <https://jisho.org>
classify words as "common". Common-only distributions are much smaller.
They are marked with "common" keyword in file names, see the [latest release][latest-release]
- with example sentences (built from the JMdict_e_examp.xml source file) - English-only version
with example sentences from the Tanaka corpus maintained by <https://tatoeba.org>.
This version doesn't have full support in this project: the NPM libraries do not provide
parsers and type definitions
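
As a sketch of the "common" rule above (TypeScript, operating on a hypothetical pre-parsed view of the XML priority elements; the type and its field names are assumptions, only the markers and the rule come from this project):

```typescript
// The markers that make a word "common", as listed above.
const COMMON_MARKERS = new Set(["news1", "ichi1", "spec1", "spec2", "gai1"]);

// Hypothetical shape: one array of ke_pri values per k_ele,
// and one array of re_pri values per r_ele.
interface ParsedXmlWord {
  kanjiPriorities: string[][];
  readingPriorities: string[][];
}

// A single matching marker on any element is enough for the whole word.
function isCommon(word: ParsedXmlWord): boolean {
  return [...word.kanjiPriorities, ...word.readingPriorities]
    .some((priorities) => priorities.some((p) => COMMON_MARKERS.has(p)));
}
```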
Also, JMdict and Kanjidic have language-specific versions with language codes
(3-letter [ISO 639-2](https://en.wikipedia.org/wiki/ISO_639-2) codes for JMdict,
2-letter [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) codes for Kanjidic) in file names:
- `all` - all languages, i.e. no language filter was applied
- `eng`/`en` - English
- `ger`/`de` - German
- `rus`/`ru` - Russian
- `hun`/`hu` - Hungarian
- `dut`/`nl` - Dutch
- `spa`/`es` - Spanish
- `fre`/`fr` - French
- `swe`/`sv` - Swedish
- `slv`/`sl` - Slovenian
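
For convenience, the two code styles can be related with a small lookup table (TypeScript), derived purely from the list above; treat it as a sketch rather than a definitive list of shipped files:

```typescript
// JMdict file names use 3-letter ISO 639-2 codes; Kanjidic file names use
// 2-letter ISO 639-1 codes. Mapping derived from the list above.
const JMDICT_TO_KANJIDIC_LANG: Record<string, string> = {
  all: "all",
  eng: "en",
  ger: "de",
  rus: "ru",
  hun: "hu",
  dut: "nl",
  spa: "es",
  fre: "fr",
  swe: "sv",
  slv: "sl",
};
```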
JMnedict and JMdict with examples have only one respective version each,
since they are both English-only, and JMnedict has no "common" indicators on entries.
## Requirements for running the conversion script
- Java 17 (JRE only, JDK is not necessary) - you can use [Azul Zulu OpenJDK][AzulJava17]
You don't need to install Gradle, just use the Gradle wrapper provided in this repository:
`gradlew` (for Linux/Mac) or `gradlew.bat` (for Windows)
## Converting XML dictionaries
NOTE: You can grab the pre-built JSON files in the [latest release][latest-release]
Use included scripts: `gradlew` (for Linux/macOS) or `gradlew.bat` (for Windows).
Tasks to convert dictionary files and create distribution archives:
- `./gradlew clean` - clean all build artifacts to start a fresh build,
in cases when you need to re-download and convert from scratch
- `./gradlew download` - download and extract original dictionary XML files into `build/dict-xml`
- `./gradlew convert` - convert all dictionaries to JSON and place into `build/dict-json`
- `./gradlew archive` - create distribution archives (zip, tar+gzip) in `build/distributions`
Utility tasks (for CI/CD workflows):
- `./gradlew --quiet jmdictHasChanged`, `./gradlew --quiet jmnedictHasChanged`,
and `./gradlew --quiet kanjidicHasChanged` - check if dictionary files have changed
by comparing checksums of downloaded files with those stored in the [checksums](checksums) directory.
Outputs `YES` or `NO`. Run this only after the `download` task!
The `--quiet` flag silences Gradle logs, e.g. when you need to put the values into
environment variables (see the CI sketch below).
- `./gradlew updateChecksums` - update checksum files in the [checksums](checksums) directory.
Run after creating distribution archives and commit checksum files into the repository,
so that next time CI/CD workflow knows if it needs to rebuild anything.
- `./gradlew uberJar` - create an Uber JAR for standalone use (i.e. w/o Gradle).
The JAR program shows help messages and should be intuitive to use if you know how to run it.
For the full list of available tasks, run `./gradlew tasks`
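
As referenced above, a rough sketch of how a CI step might chain these tasks (TypeScript on Node.js). The orchestration itself is an assumption; only the Gradle task names and their `YES`/`NO` output come from this project:

```typescript
import { execSync } from "node:child_process";

// Run a command and return its trimmed stdout.
const run = (cmd: string): string => execSync(cmd, { encoding: "utf8" }).trim();

// 1. Download first: the *HasChanged tasks are only meaningful after `download`.
run("./gradlew download");

// 2. Ask Gradle whether the JMdict source changed (prints YES or NO).
if (run("./gradlew --quiet jmdictHasChanged") === "YES") {
  // 3. Convert, package, and record new checksums for the next run.
  run("./gradlew convert");
  run("./gradlew archive");
  run("./gradlew updateChecksums");
}
```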
## Troubleshooting
- Make sure to run tasks in order: `download` -> `convert` -> `archive`
- If running Gradle fails, make sure `java` is available on your `$PATH` environment variable
- Run Gradle with `--stacktrace`, `--info`, or `--debug` arguments to see more details
if you get an error
## License
### JMdict and JMnedict
The original XML files - **JMdict.xml**, **JMdict_e.xml**, **JMdict_e_examp.xml**, and **JMnedict.xml** -
are the property of the Electronic Dictionary Research and Development Group,
and are used in conformance with the Group's [license][EDRDG-license].
Project started in 1991 by Jim Breen.
All derived files are distributed under the same license, as the original license requires it.
### Kanjidic
The original **kanjidic2.xml** file is released under
[Creative Commons Attribution-ShareAlike License v4.0][CC-BY-SA-4].
See the [Copyright and Permissions](https://www.edrdg.org/wiki/index.php/KANJIDIC_Project#Copyright_and_Permissions)
section on the Kanjidic wiki for details.
All derived files are distributed under the same license, as the original license requires it.
### RADKFILE/KRADFILE
The RADKFILE and KRADFILE files are copyright and available under the [EDRDG Licence][EDRDG-license].
The copyright of the RADKFILE2 and KRADFILE2 files is held by Jim Rose.
### NPM packages
NPM packages [`@scriptin/jmdict-simplified-types`][npm-types] and
[`@scriptin/jmdict-simplified-loader`][npm-loader] are available under [MIT license][MIT].
### Other files
The source code and other files of this project, excluding the files and packages mentioned above,
are available under [Creative Commons Attribution-ShareAlike License v4.0][CC-BY-SA-4].
See [LICENSE.txt](LICENSE.txt)
[JMdict]: http://www.edrdg.org/jmdict/j_jmdict.html
[JMnedict]: http://www.edrdg.org/enamdict/enamdict_doc.html
[Kanjidic]: https://www.edrdg.org/wiki/index.php/KANJIDIC_Project
[Kradfile]: https://www.edrdg.org/krad/kradinf.html
[latest-release]: https://github.com/scriptin/jmdict-simplified/releases/latest
[format]: https://scriptin.github.io/jmdict-simplified/
[npm-types]: https://www.npmjs.com/package/@scriptin/jmdict-simplified-types
[npm-loader]: https://www.npmjs.com/package/@scriptin/jmdict-simplified-loader
[AzulJava17]: https://www.azul.com/downloads/?version=java-17-lts&package=jre
[EDRDG-license]: http://www.edrdg.org/edrdg/licence.html
[CC-BY-SA-4]: http://creativecommons.org/licenses/by-sa/4.0/
[MIT]: https://opensource.org/license/mit/
|
[
"Multilinguality"
] |
[
"Annotation and Dataset Development",
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/FooSoft/yomichan
|
2016-03-17T03:15:49Z
|
Japanese pop-up dictionary extension for Chrome and Firefox.
|
FooSoft / yomichan
Public archive
About
Japanese pop-up dictionary
extension for Chrome and Firefox.
foosoft.net/projects/yomichan
# language # firefox # chrome # extension
# translation # dictionaries # addon
# flashcards # dictionary # japanese
# japanese-language # anki # epwing
# yomichan # anki-deck
Readme
View license
Activity
1.1k stars
21 watching
230 forks
Report repository
Releases 60
22.10.23.0 (testing)
Latest
on Oct 24, 2022
+ 59 releases
Packages
No packages published
Contributors
25
+ 11 contributors
Languages
JavaScript 82.4%
HTML 10.7%
CSS 6.0%
Other 0.9%
This repository has been archived by the owner on Feb 25, 2023. It is now read-only.
|
# Yomichan
*Note: this project is no longer maintained. Please see [this
post](https://foosoft.net/posts/sunsetting-the-yomichan-project/) for more information.*
Yomichan turns your web browser into a tool for building Japanese language literacy by helping you to decipher texts
which would otherwise be too difficult to tackle. This extension is similar to
[Rikaichamp](https://addons.mozilla.org/en-US/firefox/addon/rikaichamp/) for Firefox and
[Rikaikun](https://chrome.google.com/webstore/detail/rikaikun/jipdnfibhldikgcjhfnomkfpcebammhp?hl=en) for Chrome, but it
stands apart in its goal of being an all-encompassing learning tool as opposed to a mere browser-based dictionary.
Yomichan provides advanced features not available in other browser-based dictionaries:
* Interactive popup definition window for displaying search results.
* On-demand audio playback for select dictionary definitions.
* Kanji stroke order diagrams are just a click away for most characters.
* Custom search page for easily executing custom search queries.
* Support for multiple dictionary formats including [EPWING](https://ja.wikipedia.org/wiki/EPWING) via the [Yomichan Import](https://foosoft.net/projects/yomichan-import) tool.
* Automatic note creation for the [Anki](https://apps.ankiweb.net/) flashcard program via the [AnkiConnect](https://foosoft.net/projects/anki-connect) plugin.
* Clean, modern code makes it easy for developers to [contribute](https://github.com/FooSoft/yomichan/blob/master/CONTRIBUTING.md) new features.
[](img/ss-terms.png)
[](img/ss-kanji.png)
[](img/ss-dictionaries.png)
[](img/ss-anki.png)
## Table of Contents
* [Installation](#installation)
* [Dictionaries](#dictionaries)
* [Basic Usage](#basic-usage)
* [Custom Dictionaries](#custom-dictionaries)
* [Anki Integration](#anki-integration)
* [Flashcard Configuration](#flashcard-configuration)
* [Flashcard Creation](#flashcard-creation)
* [Keyboard Shortcuts](#keyboard-shortcuts)
* [Frequently Asked Questions](#frequently-asked-questions)
* [Licenses](#licenses)
* [Third-Party Libraries](#third-party-libraries)
## Installation
Yomichan comes in two flavors: *stable* and *testing*. Over the years, this extension has evolved to contain many
complex features which have become increasingly difficult to test across different browsers, versions, and environments.
New changes are initially introduced into the *testing* version, and after some time spent ensuring that they are
relatively bug free, they will be promoted to the *stable* version. If you are technically savvy and don't mind
submitting issues on GitHub, try the *testing* version; otherwise, the *stable* version will be your best bet.
* **Google Chrome**
([stable](https://chrome.google.com/webstore/detail/yomichan/ogmnaimimemjmbakcfefmnahgdfhfami) or [testing](https://chrome.google.com/webstore/detail/yomichan-testing/bcknnfebhefllbjhagijobjklocakpdm)) \
[](https://chrome.google.com/webstore/detail/yomichan/ogmnaimimemjmbakcfefmnahgdfhfami)
* **Mozilla Firefox**
([stable](https://addons.mozilla.org/en-US/firefox/addon/yomichan/) or [testing](https://github.com/FooSoft/yomichan/releases)<sup>*</sup>) \
[](https://addons.mozilla.org/en-US/firefox/addon/yomichan/) \
<sup>*</sup>Unlike Chrome, Firefox does not allow extensions meant for testing to be hosted in the marketplace.
You will have to download a desired version and side-load it yourself. You only need to do this once and will get
updates automatically.
## Dictionaries
There are several free Japanese dictionaries available for Yomichan, with two of them having glossaries available in
different languages. You must download and import the dictionaries you wish to use in order to enable Yomichan
definition lookups. If you have proprietary EPWING dictionaries that you would like to use, check the [Yomichan
Import](https://foosoft.net/projects/yomichan-import) page to learn how to convert and import them into Yomichan.
Be aware that the non-English dictionaries contain fewer entries than their English counterparts. Even if your primary
language is not English, you may consider also importing the English version for better coverage.
* **[JMdict](https://www.edrdg.org/jmdict/edict_doc.html)** (Japanese vocabulary)
* [jmdict\_dutch.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_dutch.zip)
* [jmdict\_english.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_english.zip)
* [jmdict\_french.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_french.zip)
* [jmdict\_german.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_german.zip)
* [jmdict\_hungarian.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_hungarian.zip)
* [jmdict\_russian.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_russian.zip)
* [jmdict\_slovenian.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_slovenian.zip)
* [jmdict\_spanish.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_spanish.zip)
* [jmdict\_swedish.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmdict_swedish.zip)
* **[JMnedict](https://www.edrdg.org/enamdict/enamdict_doc.html)** (Japanese names)
* [jmnedict.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/jmnedict.zip)
* **[KireiCake](https://kireicake.com/rikaicakes/)** (Japanese slang)
* [kireicake.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/kireicake.zip)
* **[KANJIDIC](http://nihongo.monash.edu/kanjidic2/index.html)** (Japanese kanji)
* [kanjidic\_english.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/kanjidic_english.zip)
* [kanjidic\_french.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/kanjidic_french.zip)
* [kanjidic\_portuguese.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/kanjidic_portuguese.zip)
* [kanjidic\_spanish.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/kanjidic_spanish.zip)
* **[Innocent Corpus](https://web.archive.org/web/20190309073023/https://forum.koohii.com/thread-9459.html#pid168613)** (Term and kanji frequencies across 5000+ novels)
* [innocent\_corpus.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/innocent_corpus.zip)
* **[Kanjium](https://github.com/mifunetoshiro/kanjium)** (Pitch dictionary, see [related project page](https://github.com/toasted-nutbread/yomichan-pitch-accent-dictionary) for details)
* [kanjium_pitch_accents.zip](https://github.com/FooSoft/yomichan/raw/dictionaries/kanjium_pitch_accents.zip)
## Basic Usage
1. Click the <img src="ext/images/yomichan-icon.svg" alt="" width="16" height="16"> _Yomichan_ button in the browser bar to open the quick-actions popup.
<img src="resources/images/browser-action-popup1.png" alt="">
* The <img src="ext/images/cog.svg" alt="" width="16" height="16"> _cog_ button will open the Settings page.
* The <img src="ext/images/magnifying-glass.svg" alt="" width="16" height="16"> _magnifying glass_ button will open the Search page.
* The <img src="ext/images/question-mark-circle.svg" alt="" width="16" height="16"> _question mark_ button will open the Information page.
* The <img src="ext/images/profile.svg" alt="" width="16" height="16"> _profile_ button will appear when multiple profiles exist, allowing the current profile to be quickly changed.
2. Import the dictionaries you wish to use for term and kanji searches. If you do not have any dictionaries installed
or enabled, Yomichan will warn you that it is not ready for use by displaying an orange exclamation mark over its
icon. This exclamation mark will disappear once you have installed and enabled at least one dictionary.
<img src="resources/images/settings-dictionaries-popup.png" alt="">
3. Webpage text can be scanned by moving the cursor while holding a modifier key, which is <kbd>Shift</kbd>
by default. If definitions are found for the text at the cursor position, a popup window containing term definitions
will open. This window can be dismissed by clicking anywhere outside of it.
<img src="resources/images/search-popup-terms.png" alt="">
4. Click on the <img src="ext/images/play-audio.svg" alt="" width="16" height="16"> _speaker_ button to hear the term pronounced by a native speaker. If an audio sample is
not available, you will hear a short click instead. You can configure the sources used to retrieve audio samples in
the options page.
5. Click on individual kanji in the term definition results to view additional information about those characters,
including stroke order diagrams, readings, meanings, as well as other useful data.
<img src="resources/images/search-popup-kanji.png" alt="">
## Custom Dictionaries
Yomichan supports the use of custom dictionaries, including the esoteric but popular
[EPWING](https://ja.wikipedia.org/wiki/EPWING) format. They were often utilized in portable electronic dictionaries
similar to the ones pictured below. These dictionaries are often sought after by language learners for their correctness
and excellent coverage of the Japanese language.
Unfortunately, as most of the dictionaries released in this format are proprietary, they are unable to be bundled with
Yomichan. Instead, you will need to procure these dictionaries yourself and import them using [Yomichan
Import](https://foosoft.net/projects/yomichan-import). Check the project page for additional details.

## Anki Integration
Yomichan features automatic flashcard creation for [Anki](https://apps.ankiweb.net/), a free application designed to help you
retain knowledge. This feature requires the prior installation of an Anki plugin called [AnkiConnect](https://foosoft.net/projects/anki-connect).
Check the respective project page for more information about how to set up this software.
### Flashcard Configuration
Before flashcards can be automatically created, you must configure the templates used to create term and/or kanji notes.
If you are unfamiliar with Anki deck and model management, this would be a good time to reference the [Anki
Manual](https://docs.ankiweb.net/#/). In short, you must specify what information should be included in the
flashcards that Yomichan creates through AnkiConnect.
Flashcard fields can be configured with the following steps:
1. Open the Yomichan options page and scroll down to the section labeled *Anki Options*.
2. Tick the checkbox labeled *Enable Anki integration* (Anki must be running with [AnkiConnect](https://foosoft.net/projects/anki-connect) installed).
3. Select the type of template to configure by clicking on either the *Terms* or *Kanji* tabs.
4. Select the Anki deck and model to use for creating new flashcards of this type.
5. Fill the model fields with markers corresponding to the information you wish to include (several can be used at
once). Advanced users can also configure the actual [Handlebars](https://handlebarsjs.com/) templates used to create
the flashcard contents (this is strictly optional).
#### Markers for Term Cards
Marker | Description
-------|------------
`{audio}` | Audio sample of a native speaker's pronunciation in MP3 format (if available).
`{clipboard-image}` | An image which is stored in the system clipboard, if present.
`{clipboard-text}` | Text which is stored in the system clipboard, if present.
`{cloze-body}` | Raw, inflected term as it appeared before being reduced to dictionary form by Yomichan.
`{cloze-prefix}` | Fragment of the containing `{sentence}` starting at the beginning of `{sentence}` until the beginning of `{cloze-body}`.
`{cloze-suffix}` | Fragment of the containing `{sentence}` starting at the end of `{cloze-body}` until the end of `{sentence}`.
`{conjugation}` | Conjugation path from the raw inflected term to the source term.
`{dictionary}` | Name of the dictionary from which the card is being created (unavailable in *grouped* mode).
`{document-title}` | Title of the web page that the term appeared in.
`{expression}` | Term expressed as kanji (will be displayed in kana if kanji is not available).
`{frequencies}` | Frequency information for the term.
`{furigana}` | Term expressed as kanji with furigana displayed above it (e.g. <ruby>日本語<rt>にほんご</rt></ruby>).
`{furigana-plain}` | Term expressed as kanji with furigana displayed next to it in brackets (e.g. 日本語[にほんご]).
`{glossary}` | List of definitions for the term (output format depends on whether running in *grouped* mode).
`{glossary-brief}` | List of definitions for the term in a more compact format.
`{glossary-no-dictionary}` | List of definitions for the term, except the dictionary tag is omitted.
`{part-of-speech}` | Part of speech information for the term.
`{pitch-accents}` | List of pitch accent downstep notations for the term.
`{pitch-accent-graphs}` | List of pitch accent graphs for the term.
`{pitch-accent-positions}` | List of accent downstep positions for the term as a number.
`{reading}` | Kana reading for the term (empty for terms where the expression is the reading).
`{screenshot}` | Screenshot of the web page taken at the time the term was added.
`{search-query}` | The full search query shown on the search page.
`{selection-text}` | The selected text on the search page or popup.
`{sentence}` | Sentence, quote, or phrase that the term appears in from the source content.
`{sentence-furigana}` | Sentence, quote, or phrase that the term appears in from the source content, with furigana added.
`{tags}` | Grammar and usage tags providing information about the term (unavailable in *grouped* mode).
`{url}` | Address of the web page in which the term appeared.
#### Markers for Kanji Cards
Marker | Description
-------|------------
`{character}` | Unicode glyph representing the current kanji.
`{clipboard-image}` | An image which is stored in the system clipboard, if present.
`{clipboard-text}` | Text which is stored in the system clipboard, if present.
`{cloze-body}` | Raw, inflected parent term as it appeared before being reduced to dictionary form by Yomichan.
`{cloze-prefix}` | Fragment of the containing `{sentence}` starting at the beginning of `{sentence}` until the beginning of `{cloze-body}`.
`{cloze-suffix}` | Fragment of the containing `{sentence}` starting at the end of `{cloze-body}` until the end of `{sentence}`.
`{dictionary}` | Name of the dictionary from which the card is being created.
`{document-title}` | Title of the web page that the kanji appeared in.
`{frequencies}` | Frequency information for the kanji.
`{glossary}` | List of definitions for the kanji.
`{kunyomi}` | Kunyomi (Japanese reading) for the kanji expressed as katakana.
`{onyomi}` | Onyomi (Chinese reading) for the kanji expressed as hiragana.
`{screenshot}` | Screenshot of the web page taken at the time the kanji was added.
`{search-query}` | The full search query shown on the search page.
`{selection-text}` | The selected text on the search page or popup.
`{sentence}` | Sentence, quote, or phrase that the character appears in from the source content.
`{sentence-furigana}` | Sentence, quote, or phrase that the character appears in from the source content, with furigana added.
`{stroke-count}` | Number of strokes that the kanji character has.
`{url}` | Address of the web page in which the kanji appeared.
When creating your model for Yomichan, *make sure that you pick a unique field to be first*; fields that will
contain `{expression}` or `{character}` are ideal candidates for this. Anki does not allow duplicate flashcards to be
added to a deck by default; it uses the first field in the model to check for duplicates. For example, if you have `{reading}`
configured to be the first field in your model and <ruby>橋<rt>はし</rt></ruby> is already in your deck, you will not
be able to create a flashcard for <ruby>箸<rt>はし</rt></ruby> because they share the same reading.
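
For illustration, here is one possible term-note layout expressed as a plain mapping (TypeScript). The Anki field names on the left are hypothetical; the markers on the right are the ones documented above:

```typescript
// Hypothetical Anki field names mapped to documented Yomichan markers.
// "Expression" is deliberately first so it acts as the unique field
// Anki uses for duplicate checking, as recommended above.
const termNoteFields: Record<string, string> = {
  Expression: "{expression}",
  Reading: "{reading}",
  Glossary: "{glossary}",
  Sentence: "{sentence}",
  Audio: "{audio}",
  SourceUrl: "{url}",
};
```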
### Flashcard Creation
Once Yomichan is configured, it becomes trivial to create new flashcards with a single click. You will see the following
icons next to term definitions:
* Clicking  adds the current expression as kanji (e.g. 食べる).
* Clicking  adds the current expression as hiragana or katakana (e.g. たべる).
Below are some troubleshooting tips you can try if you are unable to create new flashcards:
* Individual icons will appear grayed out if a flashcard cannot be created for the current definition (e.g. it already exists in the deck).
* If all of the buttons appear grayed out, then you should double-check your deck and model configuration settings.
* If no icons appear at all, make sure that Anki is running in the background and that [AnkiConnect](https://foosoft.net/projects/anki-connect) has been installed.
## Keyboard Shortcuts
The following shortcuts are globally available:
Shortcut | Action
---------|-------
<kbd>Alt</kbd> + <kbd>Insert</kbd> | Open search page.
<kbd>Alt</kbd> + <kbd>Delete</kbd> | Toggle extension on/off.
The following shortcuts are available on search results:
Shortcut | Action
---------|-------
<kbd>Esc</kbd> | Cancel current search.
<kbd>Alt</kbd> + <kbd>PgUp</kbd> | Page up through results.
<kbd>Alt</kbd> + <kbd>PgDn</kbd> | Page down through results.
<kbd>Alt</kbd> + <kbd>End</kbd> | Go to last result.
<kbd>Alt</kbd> + <kbd>Home</kbd> | Go to first result.
<kbd>Alt</kbd> + <kbd>Up</kbd> | Go to previous result.
<kbd>Alt</kbd> + <kbd>Down</kbd> | Go to next result.
<kbd>Alt</kbd> + <kbd>b</kbd> | Go back to the source term.
<kbd>Alt</kbd> + <kbd>e</kbd> | Add current term as expression to Anki.
<kbd>Alt</kbd> + <kbd>r</kbd> | Add current term as reading to Anki.
<kbd>Alt</kbd> + <kbd>p</kbd> | Play audio for current term.
<kbd>Alt</kbd> + <kbd>k</kbd> | Add current kanji to Anki.
## Frequently Asked Questions
**I'm having problems importing dictionaries in Firefox, what do I do?**
Yomichan uses the cross-browser IndexedDB system for storing imported dictionary data into your user profile. Although
everything "just works" in Chrome, depending on settings, Firefox users can run into problems due to browser bugs.
Yomichan catches errors and tries to offer suggestions about how to work around Firefox issues, but in general at least
one of the following solutions should work for you:
* Make sure you have cookies enabled. It appears that disabling them also disables IndexedDB for some reason. You
can still have cookies be disabled on other sites; just make sure to add the Yomichan extension to the whitelist of
whatever tool you are using to restrict cookies. You can get the extension "URL" by looking at the address bar when
you have the search page open.
* Make sure that you have sufficient disk space available on the drive Firefox uses to store your user profile.
Firefox limits the amount of space that can be used by IndexedDB to a small fraction of the disk space actually
available on your computer.
* Make sure that you have history set to "Remember history" enabled in your privacy settings. When this option is
set to "Never remember history", IndexedDB access is once again disabled for an inexplicable reason.
* As a last resort, try using the [Refresh Firefox](https://support.mozilla.org/en-US/kb/reset-preferences-fix-problems)
feature to reset your user profile. It appears that the Firefox profile system can corrupt itself preventing
IndexedDB from being accessible to Yomichan.
**Will you add support for online dictionaries?**
Online dictionaries will not be implemented because it is not possible to support them in a robust way. In order to
perform Japanese deinflection, Yomichan must execute dozens of database queries for every single word. Factoring in
network latency and the fragility of web scraping, it would not be possible to maintain a good and consistent user
experience.
**Is it possible to use Yomichan with files saved locally on my computer with Chrome?**
In order to use Yomichan with local files in Chrome, you must first tick the *Allow access to file URLs* checkbox
for Yomichan on the extensions page. Due to the restrictions placed on browser addons in the WebExtensions model, it
will likely never be possible to use Yomichan with PDF files.
**Is it possible to delete individual dictionaries without purging the database?**
Yomichan is able to delete individual dictionaries, but keep in mind that this process can be *very* slow and can
cause the browser to become unresponsive. The time it takes to delete a single dictionary can sometimes be roughly
the same as the time it originally took to import, which can be significant for certain large dictionaries.
**Why aren't EPWING dictionaries bundled with Yomichan?**
The vast majority of EPWING dictionaries are proprietary, so they are unfortunately not able to be included in
this extension due to copyright reasons.
**When are you going to add support for $MYLANGUAGE?**
Developing Yomichan requires a decent understanding of Japanese sentence structure and grammar, and other languages
are likely to have their own unique set of rules for syntax, grammar, inflection, and so on. Supporting additional
languages would not only require many additional changes to the codebase, it would also incur significant maintenance
overhead and knowledge demands for the developers. Therefore, suggestions and contributions for supporting
new languages will be declined, allowing Yomichan's focus to remain Japanese-centric.
## Licenses
Required licensing notices for this project follow below:
* **EDRDG License** \
This package uses the [EDICT](https://www.edrdg.org/jmdict/edict.html) and
[KANJIDIC](https://www.edrdg.org/wiki/index.php/KANJIDIC_Project) dictionary files. These files are the property of
the [Electronic Dictionary Research and Development Group](https://www.edrdg.org/), and are used in conformance with
the Group's [license](https://www.edrdg.org/edrdg/licence.html).
* **Kanjium License** \
The pitch accent notation, verb particle data, phonetics, homonyms and other additions or modifications to EDICT,
KANJIDIC or KRADFILE were provided by Uros Ozvatic through his free database.
## Third-Party Libraries
Yomichan uses several third-party libraries to function. Below are links to homepages, snapshots, and licenses of the exact
versions packaged.
* Handlebars: [homepage](https://handlebarsjs.com/) - [snapshot](https://s3.amazonaws.com/builds.handlebarsjs.com/handlebars.min-v4.7.7.js) - [license](https://github.com/handlebars-lang/handlebars.js/blob/v4.7.7/LICENSE)
* JSZip: [homepage](https://stuk.github.io/jszip/) - [snapshot](https://github.com/Stuk/jszip/blob/v3.9.1/dist/jszip.min.js) - [license](https://github.com/Stuk/jszip/blob/v3.9.1/LICENSE.markdown)
* WanaKana: [homepage](https://wanakana.com/) - [snapshot](https://unpkg.com/[email protected]/umd/wanakana.min.js) - [license](https://github.com/WaniKani/WanaKana/blob/4.0.2/LICENSE)
* parse5: [homepage](https://github.com/inikulin/parse5) - [snapshot](https://github.com/inikulin/parse5/tree/v7.1.1/packages/parse5) - [license](https://github.com/inikulin/parse5/blob/v7.1.1/LICENSE) _(Only used in MV3 build)_
|
[
"Multilinguality",
"Syntactic Text Processing"
] |
[
"Annotation and Dataset Development",
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/kariminf/jslingua
|
2016-03-22T10:05:12Z
|
Javascript libraries to process text: Arabic, Japanese, etc.
|
kariminf / jslingua
Public
About
Javascript libraries to process text:
Arabic, Japanese, etc.
# nodejs # javascript # nlp # language # npm
# frontend # toolkit # japanese # morphology
# transformations # transliteration
# morse-code # number-to-words # english
# conjugation # french # arabic # stemming
# verb-conjugation
Readme
Apache-2.0 license
Code of conduct
Activity
47 stars
5 watching
5 forks
Report repository
Releases 11
Version 0.12.2
Latest
on May 31, 2021
+ 10 releases
Packages
1
jslingua
Contributors
5
Languages
JavaScript 97.7%
HTML 2.0%
Other 0.3%
Badges: Testing (Click here) · License: Apache 2.0 · Travis · Downloads: 96/month
JsLingua
Javascript library for language processing.
For now, there are 4 modules: Info, Lang, Trans, and Morpho.
Functionalities
Information about the language (Info)
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> JsLingua.version
0.13.0
> let AraInfo = JsLingua.gserv('info', 'ara')
undefined
> AraInfo.gname()
'Arabic'
> AraInfo.goname()
'عربية'
> AraInfo.gdir()
'rtl'
> AraInfo.gfamily()
'Afro-asiatic'
> AraInfo.gorder()
'vso'
> AraInfo.gbranch()
'Semitic'
Basic language functions (Lang)
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> let FraLang = JsLingua.gserv('lang', 'fra')
undefined
> FraLang.nbr2words(58912)
'cinquante-huit mille neuf cent douze'
> FraLang.lchars()
['BasicLatin', 'Latin-1Supplement']
> let verifyFcts = FraLang.gcharverify('BasicLatin')
undefined
> verifyFcts.every('un élève')
false
> verifyFcts.some('un élève')
true
Different morphological functions
Verb Conjugation
Stemming: deleting affixes (suffixes, prefixes and infixes)
Noun declension: from feminine to masculine and the inverse, from singular to plural, etc.
Text segmentation: text to sentences; sentences to words
Text normalization
Stop words filtering
> FraLang.schars('BasicLatin')
undefined
> FraLang.every('un élève')
false
> FraLang.some('un élève')
true
> FraLang.strans('min2maj')
undefined
> FraLang.trans('Un texte en minuscule : lowercase')
'UN TEXTE EN MINUSCULE : LOWERCASE'
> FraLang.strans('maj2min')
undefined
> FraLang.trans('UN TEXTE EN MAJUSCULE : uppercase')
'un texte en majuscule : uppercase'
>
Transliteration (Trans)
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> let JpnTrans = JsLingua.gserv('trans', 'jpn')
undefined
> JpnTrans.l()
[ 'hepburn', 'nihonshiki', 'kunreishiki', 'morse' ]
> JpnTrans.s('hepburn')
undefined
> JpnTrans.t('じゃ,しゃしん,いっぱい')
'ja,shashin,ippai'
> JpnTrans.u('ja,shashin,ippai')
'じゃ,しゃしん,いっぱい'
> JpnTrans.s('nihonshiki')
undefined
> JpnTrans.t('じゃ,しゃしん,いっぱい')
'zya,syasin,ippai'
> JpnTrans.u('zya,syasin,ippai')
'じゃ,しゃしん,いっぱい'
> JpnTrans.s('morse')
undefined
> JpnTrans.t('しゃしん')
'-..--- --.-. .-- --.-. .-.-. ...-.'
Morphology (Morpho)
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> let EngMorpho = JsLingua.gserv('morpho', 'eng')
undefined
> let forms = EngMorpho.lform()
undefined
To get the list of available functionalities, check FCT.md
To get tutorials Click here
You can either use it by downloading it via NPM or by using the UNPKG CDN (content delivery network). There are two versions:
One file containing all the functions at once (not very large): useful if you want to use all the functions in your website, or if you don't want to use ES6 asynchronous imports.
Many files: not recommended, since the files are not minimized.
Here is an example using the UNPKG CDN.
Here is an example for when you want to select the services at execution time.
> Object.keys(forms)
[
'pres', 'past',
'fut', 'pres_perf',
'past_perf', 'fut_perf',
'pres_cont', 'past_cont',
'fut_cont', 'pres_perf_cont',
'past_perf_cont', 'fut_perf_cont'
]
> let I = {person:'first', number:'singular'}
undefined
> EngMorpho.conj('go', Object.assign({}, forms['past_perf'], I))
'had gone'
> EngMorpho.goptname('Pronoun', I)
'I'
> EngMorpho.lconv()
['sing2pl']
> EngMorpho.sconv('sing2pl')
undefined
> EngMorpho.conv('ox')
'oxen'
> EngMorpho.lstem()
[ 'porter', 'lancaster']
> EngMorpho.sstem('porter')
undefined
> EngMorpho.stem('formative')
'form'
> EngMorpho.norm("ain't")
'is not'
> EngMorpho.gsents('Where is Dr. Whatson? I cannot see him with Mr. Charloc.')
[
'Where is Dr. Whatson',
'?',
'I cannot see him with Mr. Charloc',
'.'
]
> EngMorpho.gwords('Where is Dr. Whatson')
[ 'Where', 'is', 'Dr.', 'Watson']
>
How to use?
Use in Browser
<script type="text/javascript" src="https://unpkg.com/jslingua@latest/dist/jslingua.js"></script>
<script type="text/javascript">
alert(JsLingua.version);
</script>
First of all, you have to install the package in your current project
Then, you can import it as follows:
You can call them one by one if you know the services and their implemented languages. For example, if you want to use the
Arabic implementation of "Morpho":
Or, you can just loop over the services and test available languages. For example, the "Info" service:
Check More section for more tutorials.
All the C's are here:
CREDITS : List of contributors
CONTRIBUTING : How to contribute to this project
CODE OF CONDUCT : Some recommendations must be followed for a healthy development environment.
CODE CONVENTION : Some rules to follow while coding
CHANGELOG : Changes in every version
<script type="module">
import JsLingua from "../../src/jslingua.mjs";
window.JsLingua = JsLingua;
</script>
<script type="text/javascript">
async function loadingAsync(){
await Promise.all([
JsLingua.load("[Service]", "[lang]"),
JsLingua.load("[Service]", "[lang]"),
...
]);
loading();
}
window.onload = loadingAsync;
</script>
Use in Node
npm install jslingua
let JsLingua = require("jslingua");
Get the services (Browser & Node)
//Get Arabic Morpho class
let AraMorpho = JsLingua.gserv("morpho", "ara");
//Get the list of languages codes which support the Info service
let langIDs = JsLingua.llang("info");
//Or: let langIDs = JsLingua.llang("info"); //list languages
let result = "";
for (let i = 0; i < langIDs.length; i++){
let infoClass = JsLingua.gserv("info", langIDs[i]);
result += i + "- " + infoClass.getName() + "\n";
}
Community
If you are looking to have fun, you are welcome to contribute. If you think this project must have a business plan, please
feel free to refer to this project (click)
You can test the browser version on https://kariminf.github.io/jslingua.web
You can test nodejs version online on https://runkit.com/npm/jslingua
jsdoc generated API is located in https://kariminf.github.io/jslingua.web/docs/
Examples on how to use JsLingua are located in https://github.com/kariminf/jslingua_docs
A tutorial on how to use JsLingua is located in https://github.com/kariminf/jslingua_docs/blob/master/doc/index.md
A Youtube tutorial for JsLingua and nodejs is located in this list: https://www.youtube.com/watch?
v=piAysG5W55A&list=PLMNbVokbNS0cIjZxF8AnmgDfmu3XXddeq
This project aims to provide some language-related tasks, such as detecting charsets, some transformations (uppercase to lowercase), verb conjugation, etc. There are many similar projects, such as NLTK (Python) and OpenNLP (Java), but they are mostly server-side and need some configuration before being put into action.
A lot of tasks, such as stemming, tokenization and transliteration, don't need many resources. When we use these toolkits in a web application, the server does all of these tasks. Why not exploit the users' machines to do such tasks and gain some benefits:
The server is relieved to do other serious tasks.
The number of communications drops, resulting in a faster response time and a better user experience.
The amount of exchanged data may drop; for example, when we want to send a big text, we can tokenize it, stem it and remove stop words first, which decreases the size of the data to be sent.
Easy to configure and to integrate into your web pages.
It can also be used server-side using node.js.
The project's ambitions are:
To deliver the maximum of language-related tasks with a minimum of resources pushed down to the client.
To benefit from object-oriented programming (OOP) concepts so the code stays minimal and readable.
To give web masters the ability to choose which tasks they want to use, through many modules instead of one giant program.
To provide good resources for those who want to learn JavaScript programming.
TO HAVE FUN: programming is fun, spend time on useful things; happiness is when your work is helpful to others, and more obstacles give more experience.
Copyright (C) 2016-2021 Abdelkrime Aries
More
About the project
Contributors
License
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
|
# JsLingua

[](https://nodei.co/npm/jslingua/)
[](https://kariminf.github.io/jslingua.web)
[](http://www.apache.org/licenses/LICENSE-2.0)
[](https://travis-ci.com/kariminf/jslingua)
[](https://www.npmjs.com/package/jslingua)
Javascript library for language processing.
## Functionalities
For now, there are 4 modules: Info, Lang, Trans and Morpho.
### Information about the language (Info)
```shell
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> JsLingua.version
0.13.0
> let AraInfo = JsLingua.gserv('info', 'ara')
undefined
> AraInfo.gname()
'Arabic'
> AraInfo.goname()
'عربية'
> AraInfo.gdir()
'rtl'
> AraInfo.gfamily()
'Afro-asiatic'
> AraInfo.gorder()
'vso'
> AraInfo.gbranch()
'Semitic'
```
<!--

-->
### Basic language functions (Lang)
- Detection of language's characters
- Transforming numbers to strings (pronunciation)
```shell
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> let FraLang = JsLingua.gserv('lang', 'fra')
undefined
> FraLang.nbr2words(58912)
'cinquante-huit mille neuf cent douze'
> FraLang.lchars()
['BasicLatin', 'Latin-1Supplement']
> let verifyFcts = FraLang.gcharverify('BasicLatin')
undefined
> verifyFcts.every('un élève')
false
> verifyFcts.some('un élève')
true
> FraLang.schars('BasicLatin')
undefined
> FraLang.every('un élève')
false
> FraLang.some('un élève')
true
> FraLang.strans('min2maj')
undefined
> FraLang.trans('Un texte en minuscule : lowercase')
'UN TEXTE EN MINUSCULE : LOWERCASE'
> FraLang.strans('maj2min')
undefined
> FraLang.trans('UN TEXTE EN MAJUSCULE : uppercase')
'un texte en majuscule : uppercase'
>
```
<!--

-->
### Transliteration (Trans)
```shell
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> let JpnTrans = JsLingua.gserv('trans', 'jpn')
undefined
> JpnTrans.l()
[ 'hepburn', 'nihonshiki', 'kunreishiki', 'morse' ]
> JpnTrans.s('hepburn')
undefined
> JpnTrans.t('じゃ,しゃしん,いっぱい')
'ja,shashin,ippai'
> JpnTrans.u('ja,shashin,ippai')
'じゃ,しゃしん,いっぱい'
> JpnTrans.s('nihonshiki')
undefined
> JpnTrans.t('じゃ,しゃしん,いっぱい')
'zya,syasin,ippai'
> JpnTrans.u('zya,syasin,ippai')
'じゃ,しゃしん,いっぱい'
> JpnTrans.s('morse')
undefined
> JpnTrans.t('しゃしん')
'-..--- --.-. .-- --.-. .-.-. ...-.'
```
<!--

-->
### Morphology (Morpho)
Different morphological functions
- Verb Conjugation
- Stemming: deleting affixes (suffixes, prefixes and infixes)
- Noun declension: from feminine to masculine and the inverse, from singular to plural, etc.
- Text segmentation: text to sentences; sentences to words
- Text normalization
- Stop words filtering
```shell
Welcome to Node.js v14.17.0
Type ".help" for more information.
> let JsLingua = require('jslingua')
undefined
> let EngMorpho = JsLingua.gserv('morpho', 'eng')
undefined
> let forms = EngMorpho.lform()
undefined
> Object.keys(forms)
[
'pres', 'past',
'fut', 'pres_perf',
'past_perf', 'fut_perf',
'pres_cont', 'past_cont',
'fut_cont', 'pres_perf_cont',
'past_perf_cont', 'fut_perf_cont'
]
> let I = {person:'first', number:'singular'}
undefined
> EngMorpho.conj('go', Object.assign({}, forms['past_perf'], I))
'had gone'
> EngMorpho.goptname('Pronoun', I)
'I'
> EngMorpho.lconv()
['sing2pl']
> EngMorpho.sconv('sing2pl')
undefined
> EngMorpho.conv('ox')
'oxen'
> EngMorpho.lstem()
[ 'porter', 'lancaster']
> EngMorpho.sstem('porter')
undefined
> EngMorpho.stem('formative')
'form'
> EngMorpho.norm("ain't")
'is not'
> EngMorpho.gsents('Where is Dr. Whatson? I cannot see him with Mr. Charloc.')
[
'Where is Dr. Whatson',
'?',
'I cannot see him with Mr. Charloc',
'.'
]
> EngMorpho.gwords('Where is Dr. Whatson')
[ 'Where', 'is', 'Dr.', 'Watson']
>
```
<!--


-->
To get the list of available functionalities, check [FCT.md](./FCT.md)
To get tutorials [Click here](https://github.com/kariminf/jslingua_docs/blob/master/doc/index.md)
## How to use?
### Use in Browser
You can either use it by downloading it via **NPM** or by using the **UNPKG** CDN (content delivery network).
There are two versions:
- One file containing all the functions at once (not very large): useful if you want to use all the functions in your website, or if you don't want to use ES6 asynchronous imports.
- Many files: not recommended, since the files are not minimized.
Here is an example using the UNPKG CDN:
```javascript
<script type="text/javascript" src="https://unpkg.com/jslingua@latest/dist/jslingua.js"></script>
<script type="text/javascript">
alert(JsLingua.version);
</script>
```
Here is an example for when you want to select the services at execution time:
```javascript
<script type="module">
import JsLingua from "../../src/jslingua.mjs";
window.JsLingua = JsLingua;
</script>
<script type="text/javascript">
async function loadingAsync(){
await Promise.all([
JsLingua.load("[Service]", "[lang]"),
JsLingua.load("[Service]", "[lang]"),
...
]);
loading();
}
window.onload = loadingAsync;
</script>
```
### Use in Node
First of all, you have to install the package in your current project
```
npm install jslingua
```
Then, you can import it as follows:
```javascript
let JsLingua = require("jslingua");
```
### Get the services (Browser & Node)
You can call them one by one if you know the services and their implemented languages.
For example, if you want to use the Arabic implementation of "Morpho":
```javascript
//Get Arabic Morpho class
let AraMorpho = JsLingua.gserv("morpho", "ara");
```
Or, you can just loop over the services and test available languages.
For example, the "Info" service:
```javascript
//Get the list of languages codes which support the Info service
let langIDs = JsLingua.llang("info");
//Or: let langIDs = JsLingua.llang("info"); //list languages
let result = "";
for (let i = 0; i < langIDs.length; i++){
let infoClass = JsLingua.gserv("info", langIDs[i]);
result += i + "- " + infoClass.getName() + "\n";
}
```
Check [More](#more) section for more tutorials.
## Community
All the C's are here:
* [CREDITS](./CREDITS.md) : List of contributors
* [CONTRIBUTING](./CONTRIBUTING.md) : How to contribute to this project
* [CODE OF CONDUCT](./CODE_OF_CONDUCT.md) : Some recommendations must be followed for a healthy development environment.
* [CODE CONVENTION](./CODE_CONVENTION.md) : Some rules to follow while coding
* [CHANGELOG](./CHANGELOG.md) : Changes in every version
If you are looking to have fun, you are welcome to contribute.
If you think this project must have a business plan, please feel free to refer to [this project (click)](https://github.com/kariminf/tnbp)
## More
You can test the browser version on [https://kariminf.github.io/jslingua.web](https://kariminf.github.io/jslingua.web)
You can test nodejs version online on [https://runkit.com/npm/jslingua](https://runkit.com/npm/jslingua)
jsdoc generated API is located in [https://kariminf.github.io/jslingua.web/docs/](https://kariminf.github.io/jslingua.web/docs/)
Examples on how to use JsLingua are located in [https://github.com/kariminf/jslingua_docs](https://github.com/kariminf/jslingua_docs)
A tutorial on how to use JsLingua is located in [https://github.com/kariminf/jslingua_docs/blob/master/doc/index.md](https://github.com/kariminf/jslingua_docs/blob/master/doc/index.md)
A Youtube tutorial for JsLingua and nodejs is located in this list: [https://www.youtube.com/watch?v=piAysG5W55A&list=PLMNbVokbNS0cIjZxF8AnmgDfmu3XXddeq](https://www.youtube.com/watch?v=piAysG5W55A&list=PLMNbVokbNS0cIjZxF8AnmgDfmu3XXddeq)
## About the project
This project aims to provide some language-related tasks, such as detecting charsets, some transformations (uppercase to lowercase), verb conjugation, etc.
There are many similar projects, such as [NLTK](https://github.com/nltk/nltk) (Python) and [OpenNLP](https://github.com/apache/opennlp) (Java), but they are mostly server-side and need some configuration before being put into action.
A lot of tasks, such as stemming, tokenization and transliteration, don't need many resources.
When we use these toolkits in a web application, the server does all of these tasks.
Why not exploit the users' machines to do such tasks and gain some benefits:
* The server is relieved to do other serious tasks.
* The number of communications drops, resulting in a faster response time and a better user experience.
* The amount of exchanged data may drop; for example, when we want to send a big text, we can tokenize it, stem it and remove stop words first, which decreases the size of the data to be sent.
* Easy to configure and to integrate into your web pages.
It can also be used server-side using [node.js](https://github.com/nodejs/node).
The project's ambitions are:
* To deliver the maximum of language-related tasks with a minimum of resources pushed down to the client.
* To benefit from object-oriented programming (OOP) concepts so the code stays minimal and readable.
* To give web masters the ability to choose which tasks they want to use, through many modules instead of one giant program.
* To provide good resources for those who want to learn JavaScript programming.
* **TO HAVE FUN**: programming is fun, spend time on useful things; happiness is when your work is helpful to others, and more obstacles give more experience.
## Contributors

## License
Copyright (C) 2016-2021 Abdelkrime Aries
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
[
"Multilinguality",
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/ikegami-yukino/jaconv
|
2016-04-02T06:40:10Z
|
Pure-Python Japanese character interconverter for Hiragana, Katakana, Hankaku, and Zenkaku
|
ikegami-yukino / jaconv
Public
Branches
Tags
Go to file
Code
.github
docs
jaconv
.gitignore
.travis.…
CHAN…
LICEN…
MANIF…
READ…
READ…
setup.py
test_ja…
jaconv (Japanese Converter) is an interconverter for Hiragana, Katakana, Hankaku (half-width
character) and Zenkaku (full-width character)
Japanese README is available.
About
Pure-Python Japanese character
interconverter for Hiragana,
Katakana, Hankaku, and Zenkaku
ikegami-yukino.github.io/jaconv/…
# transliteration # japanese-language
# text-processing # pure-python
# preprocessing # character-converter
# japanese-kana # julius
Readme
MIT license
Activity
314 stars
10 watching
28 forks
Report repository
Releases
4 tags
Sponsor this project
https://www.paypal.me/yukinoi
https://www.amazon.co.jp/hz/wis…
Learn more about GitHub Sponsors
Packages
No packages published
Contributors
11
Code
Issues
6
Pull requests
Actions
Projects
Security
Insights
jaconv
README
MIT license
$ pip install jaconv
See also document
Languages
Python 100.0%
INSTALLATION
USAGE
import jaconv
# Hiragana to Katakana
jaconv.hira2kata('ともえまみ')
# => 'トモエマミ'
# Hiragana to half-width Katakana
jaconv.hira2hkata('ともえまみ')
# => 'トモエマミ'
# Katakana to Hiragana
jaconv.kata2hira('巴マミ')
# => '巴まみ'
# half-width character to full-width character
# default parameters are followings: kana=True, ascii=False, digit=False
jaconv.h2z('ティロ・フィナーレ')
# => 'ティロ・フィナーレ'
# half-width character to full-width character
# but only ascii characters
jaconv.h2z('abc', kana=False, ascii=True, digit=False)
# => 'abc'
# half-width character to full-width character
# but only digit characters
jaconv.h2z('123', kana=False, ascii=False, digit=True)
# => '123'
# half-width character to full-width character
# except half-width Katakana
jaconv.h2z('アabc123', kana=False, digit=True, ascii=True)
# => 'アabc123'
# an alias of h2z
jaconv.hankaku2zenkaku('ティロ・フィナーレabc123')
# => 'ティロ・フィナーレabc123'
# full-width character to half-width character
# default parameters are followings: kana=True, ascii=False, digit=False
jaconv.z2h('ティロ・フィナーレ')
# => 'ティロ・フィナーレ'
# full-width character to half-width character
The jaconv.normalize method extends unicodedata.normalize for Japanese language processing.
'〜' => 'ー'
'~' => 'ー'
"’" => "'"
'”'=> '"'
'“' => '``'
'―' => '-'
'‐' => '-'
'˗' => '-'
# but only ascii characters
jaconv.z2h('abc', kana=False, ascii=True, digit=False)
# => 'abc'
# full-width character to half-width character
# but only digit characters
jaconv.z2h('123', kana=False, ascii=False, digit=True)
# => '123'
# full-width character to half-width character
# except full-width Katakana
jaconv.z2h('アabc123', kana=False, digit=True, ascii=True)
# => 'アabc123'
# an alias of z2h
jaconv.zenkaku2hankaku('ティロ・フィナーレabc123')
# => 'ティロ・フィナーレabc123'
# normalize
jaconv.normalize('ティロ・フィナ〜レ', 'NFKC')
# => 'ティロ・フィナーレ'
# Hiragana to alphabet
jaconv.kana2alphabet('じゃぱん')
# => 'japan'
# Alphabet to Hiragana
jaconv.alphabet2kana('japan')
# => 'じゃぱん'
# Katakana to Alphabet
jaconv.kata2alphabet('ケツイ')
# => 'ketsui'
# Alphabet to Katakana
jaconv.alphabet2kata('namba')
# => 'ナンバ'
# Hiragana to Julius's phoneme format
jaconv.hiragana2julius('てんきすごくいいいいいい')
# => 't e N k i s u g o k u i:'
NOTE
'֊' => '-'
'‐' => '-'
'‑' => '-'
'‒' => '-'
'–' => '-'
'⁃' => '-'
'⁻' => '-'
'₋' => '-'
'−' => '-'
'﹣' => 'ー'
'-' => 'ー'
'—' => 'ー'
'―' => 'ー'
'━' => 'ー'
|
jaconv
==========
|coveralls| |pyversion| |version| |license| |download|
jaconv (Japanese Converter) is an interconverter for Hiragana, Katakana, Hankaku (half-width character) and Zenkaku (full-width character)
`Japanese README <https://github.com/ikegami-yukino/jaconv/blob/master/README_JP.rst>`_ is available.
INSTALLATION
==============
::
$ pip install jaconv
USAGE
============
See also `document <http://ikegami-yukino.github.io/jaconv/jaconv.html>`_
.. code:: python
import jaconv
# Hiragana to Katakana
jaconv.hira2kata('ともえまみ')
# => 'トモエマミ'
# Hiragana to half-width Katakana
jaconv.hira2hkata('ともえまみ')
# => 'トモエマミ'
# Katakana to Hiragana
jaconv.kata2hira('巴マミ')
# => '巴まみ'
# half-width character to full-width character
# default parameters are followings: kana=True, ascii=False, digit=False
jaconv.h2z('ティロ・フィナーレ')
# => 'ティロ・フィナーレ'
# half-width character to full-width character
# but only ascii characters
jaconv.h2z('abc', kana=False, ascii=True, digit=False)
# => 'abc'
# half-width character to full-width character
# but only digit characters
jaconv.h2z('123', kana=False, ascii=False, digit=True)
# => '123'
# half-width character to full-width character
# except half-width Katakana
jaconv.h2z('アabc123', kana=False, digit=True, ascii=True)
# => 'アabc123'
# an alias of h2z
jaconv.hankaku2zenkaku('ティロ・フィナーレabc123')
# => 'ティロ・フィナーレabc123'
# full-width character to half-width character
# default parameters are followings: kana=True, ascii=False, digit=False
jaconv.z2h('ティロ・フィナーレ')
# => 'ティロ・フィナーレ'
# full-width character to half-width character
# but only ascii characters
jaconv.z2h('abc', kana=False, ascii=True, digit=False)
# => 'abc'
# full-width character to half-width character
# but only digit characters
jaconv.z2h('123', kana=False, ascii=False, digit=True)
# => '123'
# full-width character to half-width character
# except full-width Katakana
jaconv.z2h('アabc123', kana=False, digit=True, ascii=True)
# => 'アabc123'
# an alias of z2h
jaconv.zenkaku2hankaku('ティロ・フィナーレabc123')
# => 'ティロ・フィナーレabc123'
# normalize
jaconv.normalize('ティロ・フィナ〜レ', 'NFKC')
# => 'ティロ・フィナーレ'
# Hiragana to alphabet
jaconv.kana2alphabet('じゃぱん')
# => 'japan'
# Alphabet to Hiragana
jaconv.alphabet2kana('japan')
# => 'じゃぱん'
# Katakana to Alphabet
jaconv.kata2alphabet('ケツイ')
# => 'ketsui'
# Alphabet to Katakana
jaconv.alphabet2kata('namba')
# => 'ナンバ'
# Hiragana to Julius's phoneme format
jaconv.hiragana2julius('てんきすごくいいいいいい')
# => 't e N k i s u g o k u i:'
NOTE
============
The jaconv.normalize method extends unicodedata.normalize for Japanese language processing.
.. code::
'〜' => 'ー'
'~' => 'ー'
"’" => "'"
'”'=> '"'
'“' => '``'
'―' => '-'
'‐' => '-'
'˗' => '-'
'֊' => '-'
'‐' => '-'
'‑' => '-'
'‒' => '-'
'–' => '-'
'⁃' => '-'
'⁻' => '-'
'₋' => '-'
'−' => '-'
'﹣' => 'ー'
'-' => 'ー'
'—' => 'ー'
'―' => 'ー'
'━' => 'ー'
'─' => 'ー'
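A minimal sketch (not from the jaconv documentation) contrasting the standard library's unicodedata.normalize with jaconv.normalize, based on the documented example above: plain NFKC leaves the wave dash '〜' untouched, while jaconv additionally applies the Japanese-specific mappings listed here.

.. code:: python

    import unicodedata
    import jaconv

    s = 'ティロ・フィナ〜レ'
    # Standard NFKC: U+301C WAVE DASH has no compatibility mapping, so '〜' stays as-is
    print(unicodedata.normalize('NFKC', s))
    # jaconv.normalize: NFKC plus the mappings above, so '〜' becomes the long-vowel mark 'ー'
    print(jaconv.normalize(s, 'NFKC'))  # => 'ティロ・フィナーレ' (as documented above)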
.. |coveralls| image:: https://coveralls.io/repos/ikegami-yukino/jaconv/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/ikegami-yukino/jaconv?branch=master
:alt: coveralls.io
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/jaconv.svg
.. |version| image:: https://img.shields.io/pypi/v/jaconv.svg
:target: http://pypi.python.org/pypi/jaconv/
:alt: latest version
.. |license| image:: https://img.shields.io/pypi/l/jaconv.svg
:target: http://pypi.python.org/pypi/jaconv/
:alt: license
.. |download| image:: https://static.pepy.tech/personalized-badge/neologdn?period=total&units=international_system&left_color=black&right_color=blue&left_text=Downloads
:target: https://pepy.tech/project/neologdn
:alt: download
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/nicolas-raoul/jakaroma
|
2016-04-11T09:21:38Z
|
Java library and command-line tool to transliterate Japanese kanji to romaji (Latin alphabet)
|
nicolas-raoul / jakaroma
Public
Branches
Tags
Go to file
Code
src
.gitignore
LICEN…
READ…
build.sh
jakaro…
pom.xml
Java kanji/etc-to-romaji converter.
Jakaroma converts kanji and kana (katakana, hiragana)
to romaji (Latin alphabet), which can be useful to make
Japanese words more-or-less readable by readers who
cannot read Japanese. Example usage: A map app
might want to convert strings such as "ハレクラニ沖縄"
to "Harekurani Okinawa" for users whose locale is not
the Japanese language. We hope that results are better
than nothing, but please note that many conversions are
not perfect. Pull requests welcome!
Make sure you add the dependency below to your
pom.xml before building your project.
About
Java library and command-line tool
to transliterate Japanese kanji to
romaji (Latin alphabet)
# java # converter # japanese # romaji # kana
# kanji # java-library # japanese-language
Readme
Apache-2.0 license
Activity
63 stars
4 watching
9 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
5
Languages
Java 99.0%
Shell 1.0%
Code
Issues
9
Pull requests
Actions
Projects
Wiki
Security
Insights
Jakaroma
<dependency>
<groupId>com.github.nicolas-
raoul</groupId>
<artifactId>jakaroma</artifactId>
README
Apache-2.0 license
Usage:
Build a single jar file with
or just
Or you can put it into your Maven project.
Powered by Kuromoji.
<version>1.0.0</version>
</dependency>
mvn clean compile assembly:single
$ ./jakaroma.sh 六本木ヒルズ森タワー
Roppongi Hiruzu Mori Tawa-
java -cp "target/jakaroma-1.0.0-jar-
with-dependencies.jar"
fr.free.nrw.jakaroma.Jakaroma 六本木ヒル
ズ森タワー
Roppongi Hiruzu Mori Tawa-
|
# Jakaroma
Java kanji/etc-to-romaji converter.
Jakaroma converts kanji and kana (katakana, hiragana) to romaji (Latin alphabet), which can be useful to make Japanese words more-or-less readable by readers who cannot read Japanese. Example usage: A map app _might_ want to convert strings such as "ハレクラニ沖縄" to "Harekurani Okinawa" for users whose locale is not Japanese. We hope that results are _better than nothing_, but please note that many conversions are not perfect. Pull requests welcome!
Make sure you add the dependency below to your pom.xml before building your project.
```
<dependency>
<groupId>com.github.nicolas-raoul</groupId>
<artifactId>jakaroma</artifactId>
<version>1.0.0</version>
</dependency>
```
Usage:
Build a single jar file with
```
mvn clean compile assembly:single
```
```
$ ./jakaroma.sh 六本木ヒルズ森タワー
Roppongi Hiruzu Mori Tawa-
```
or just
```
java -cp "target/jakaroma-1.0.0-jar-with-dependencies.jar" fr.free.nrw.jakaroma.Jakaroma 六本木ヒルズ森タワー
Roppongi Hiruzu Mori Tawa-
```
Or you can put it into your Maven project.
Powered by [Kuromoji](https://github.com/atilika/kuromoji).
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/gemmarx/unicode-jp-rs
|
2016-05-21T07:39:48Z
|
A Rust library to convert Japanese Half-width-kana[半角カナ] and Wide-alphanumeric[全角英数] into normal ones
|
gemmarx / unicode-jp-rs
Public
Branches
Tags
Go to file
Code
src
.gitignore
.travis.…
Cargo.l…
Cargo.…
LICEN…
readm…
Converters of troublesome characters included in Japanese texts.
Half-width-kana[半角カナ;HANKAKU KANA] -> normal Katakana
Wide-alphanumeric[全角英数;ZENKAKU EISU] <-> normal
ASCII
If you need canonicalization of texts including Japanese, consider using the
unicode_normalization crate first. NFD, NFKD, NFC and NFKC can be used.
This crate, however, is useful for niche cases, such as needing fine-grained
control of Japanese characters for a restrictive character terminal.
About
A Rust library to convert Japanese
Half-width-kana[半角カナ] and Wide-
alphanumeric[全角英数] into normal
ones
# rust # unicode # japanese # kana # zenkaku
# hankaku
Readme
MIT license
Activity
19 stars
2 watching
5 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
3
Languages
Rust 99.5%
Shell 0.5%
Code
Issues
Pull requests
2
Actions
Projects
Security
Insights
Unicode-JP (Rust)
README
MIT license
Japanese has two syllabary systems, Hiragana and Katakana, and Half-width-kana
is another notation system for them. In these systems there are two combinable
diacritical marks, the Voiced-sound-mark and the Semi-voiced-sound-mark, and
Unicode has three independent code points for each of the marks. In addition,
Japanese texts often use special-style Latin letters and Arabic numerals called
Wide-alphanumeric. This small utility converts between these codes.
API Reference
Cargo.toml
src/main.rs
wide2ascii(&str) -> String
convert Wide-alphanumeric into normal ASCII [A -> A]
ascii2wide(&str) -> String
convert normal ASCII characters into Wide-alphanumeric [A ->
A]
Example
[dependencies]
unicode-jp = "0.4.0"
extern crate kana;
use kana::*;
fn main() {
let s1 = "マツオ バショウ ア゚";
assert_eq!("マツオ バショウ ア ゚", half2kana(s1));
assert_eq!("マツオ バショウ ア゚", half2full(s1));
let s2 = "ひ゜ひ゛んは゛";
assert_eq!("ぴびんば", combine(s2));
assert_eq!("ひ ゚ひ ゙んは ゙", vsmark2combi(s2));
let s3 = "#&Rust-1.6!";
assert_eq!("#&Rust-1.6!", wide2ascii(s3));
}
Functions of kana crate:
half2full(&str) -> String
convert Half-width-kana into normal Katakana with diacritical
marks separated [ア゙パ -> ア゙パ]
This method is simple, but tends to cause troubles when
rendering. In such a case, use half2kana() or execute
vsmark2{full|half|combi} as post process.
half2kana(&str) -> String
convert Half-width-kana into normal Katakana with diacritical
marks combined [ア゙パ -> ア゙パ]
combine(&str) -> String
combine base characters and diacritical marks on
Hiragana/Katakana [がハ゜ -> がパ]
hira2kata(&str) -> String
convert Hiragana into Katakana [あ -> ア]
kata2hira(&str) -> String
convert Katakana into Hiragana [ア -> あ]
vsmark2full(&str) -> String
convert all separated Voiced-sound-marks into full-width style
"\u{309B}"
vsmark2half(&str) -> String
convert all separated Voiced-sound-marks into half-width style
"\u{FF9E}"
vsmark2combi(&str) -> String
convert all separated Voiced-sound-marks into
space+combining style "\u{20}\u{3099}"
nowidespace(&str) -> String
convert Wide-space into normal space [" " -> " "]
space2wide(&str) -> String
convert normal space into Wide-space [" " -> " "]
nowideyen(&str) -> String
convert Wide-yen into Half-width-yen ["¥" -> "¥"]
|
Unicode-JP (Rust)
----
[](https://travis-ci.org/gemmarx/unicode-jp-rs)
[](https://crates.io/crates/unicode-jp)
[](./LICENSE)
Converters of troublesome characters included in Japanese texts.
- Half-width-kana[半角カナ;HANKAKU KANA] -> normal Katakana
- Wide-alphanumeric[全角英数;ZENKAKU EISU] <-> normal ASCII
If you need canonicalization of texts including Japanese, consider using the [unicode_normalization](https://github.com/unicode-rs/unicode-normalization) crate first.
NFD, NFKD, NFC and NFKC can be used.
This crate, however, is useful for niche cases, such as needing fine-grained control of Japanese characters for a restrictive character terminal.
Japanese has two syllabary systems, Hiragana and Katakana, and Half-width-kana is another notation system for them.
In these systems there are two combinable diacritical marks, the Voiced-sound-mark and the Semi-voiced-sound-mark.
Unicode has three independent code points for each of the marks.
In addition, Japanese texts often use special-style Latin letters and Arabic numerals called Wide-alphanumeric.
This small utility converts between these codes.
[API Reference](https://gemmarx.github.io/unicode-jp-rs/doc/kana/index.html)
### Example
Cargo.toml
```toml
[dependencies]
unicode-jp = "0.4.0"
```
src/main.rs
```rust
extern crate kana;
use kana::*;
fn main() {
let s1 = "マツオ バショウ ア゚";
assert_eq!("マツオ バショウ ア ゚", half2kana(s1));
assert_eq!("マツオ バショウ ア゚", half2full(s1));
let s2 = "ひ゜ひ゛んは゛";
assert_eq!("ぴびんば", combine(s2));
assert_eq!("ひ ゚ひ ゙んは ゙", vsmark2combi(s2));
let s3 = "#&Rust-1.6!";
assert_eq!("#&Rust-1.6!", wide2ascii(s3));
}
```
### Functions of kana crate:
- wide2ascii(&str) -> String
convert Wide-alphanumeric into normal ASCII [A -> A]
- ascii2wide(&str) -> String
convert normal ASCII characters into Wide-alphanumeric [A -> A]
- half2full(&str) -> String
convert Half-width-kana into normal Katakana with diacritical marks separated [ア゙パ -> ア゙パ]
This method is simple, but tends to cause troubles when rendering.
In such a case, use half2kana() or execute vsmark2{full|half|combi} as post process.
- half2kana(&str) -> String
convert Half-width-kana into normal Katakana with diacritical marks combined [ア゙パ -> ア゙パ]
- combine(&str) -> String
combine base characters and diacritical marks on Hiragana/Katakana [がハ゜ -> がパ]
- hira2kata(&str) -> String
convert Hiragana into Katakana [あ -> ア]
- kata2hira(&str) -> String
convert Katakana into Hiragana [ア -> あ]
- vsmark2full(&str) -> String
convert all separated Voiced-sound-marks into full-width style "\u{309B}"
- vsmark2half(&str) -> String
convert all separated Voiced-sound-marks into half-width style "\u{FF9E}"
- vsmark2combi(&str) -> String
convert all separated Voiced-sound-marks into space+combining style "\u{20}\u{3099}"
- nowidespace(&str) -> String
convert Wide-space into normal space [" " -> " "]
- space2wide(&str) -> String
convert normal space into Wide-space [" " -> " "]
- nowideyen(&str) -> String
convert Wide-yen into Half-width-yen ["¥" -> "¥"]
- yen2wide(&str) -> String
convert Half-width-yen into Wide-yen ["¥" -> "¥"]
## TODO or NOT TODO
- Voiced-sound-marks -> no space combining style "\u{3099}"
- Half-width-kana <- normal Katakana
- (normal/wide)tilde <-> Wave-dash
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/andree-surya/moji4j
|
2016-07-11T00:50:52Z
|
A Java library to convert between Japanese Hiragana, Katakana, and Romaji scripts.
|
andree-surya / moji4j
Public
Branches
Tags
Go to file
Code
script
src
.gitignore
READ…
pom.xml
Moji4J is an open source Java library to convert between
Japanese Hiragana, Katakana, and Romaji scripts.
Please add the following Maven dependency to your pom.xml :
About
A Java library to convert between
Japanese Hiragana, Katakana, and
Romaji scripts.
Readme
Activity
32 stars
1 watching
8 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
Languages
Java 90.1%
Ruby 9.9%
Code
Issues
4
Pull requests
Actions
Projects
Security
Insights
Installation
<dependency>
<groupId>com.andree-surya</groupId>
<artifactId>moji4j</artifactId>
<version>1.0.0</version>
</dependency>
Romaji and Kana Conversion
MojiConverter converter = new MojiConverter();
converter.convertRomajiToHiragana("Hiragana");
// ひらがな
converter.convertRomajiToKatakana("Katakana");
// カタカナ
converter.convertKanaToRomaji("ひらがな"); //
hiragana
README
The romanization system adopted by this library is loosely
based on the modern Hepburn system, with adjustments for
cases that might cause ambiguity during conversion.
Cases
Romaji
Kana
Long vowels
sentaa
センター
ookayama
おおかやま
o+u vowels
toukyou
とうきょう
ousama
おうさま
Long consonants
kekka
けっか
kasetto
カセット
kotchi
こっち
Syllabic n
gunma
ぐんま
kan'i
かんい
shin'you
しんよう
Particles
he
へ
wo
を
Others
dzu
づ
converter.convertKanaToRomaji("カタカナ"); //
katakana
Romaji, Kana, and Kanji Detection
MojiDetector detector = new MojiDetector();
detector.hasKanji("まっ暗"); // true
detector.hasKanji("まっしろ"); // false
detector.hasKana("ウソ付き"); // true
detector.hasKana("東京"); // false
detector.hasRomaji("モデル XYZ"); // true
detector.hasRomaji("フルーツ"); // false
Romanization Convention
The romanization tables used in this library are derived from
Mojinizer, a Ruby library to convert between Japanese Kana
and Romaji. The data is modified to suit differences in algorithm
and romanization convention.
Acknowledgement
License
© Copyright 2016 Andree Surya
Licensed under the Apache License, Version 2.0
(the "License");
you may not use this file except in compliance
with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to
in writing, software
distributed under the License is distributed
on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
|
**Moji4J** is an open source Java library to convert between Japanese Hiragana, Katakana, and Romaji scripts.
## Installation
Please add the following Maven dependency to your `pom.xml`:
<dependency>
<groupId>com.andree-surya</groupId>
<artifactId>moji4j</artifactId>
<version>1.0.0</version>
</dependency>
## Romaji and Kana Conversion
MojiConverter converter = new MojiConverter();
converter.convertRomajiToHiragana("Hiragana"); // ひらがな
converter.convertRomajiToKatakana("Katakana"); // カタカナ
converter.convertKanaToRomaji("ひらがな"); // hiragana
converter.convertKanaToRomaji("カタカナ"); // katakana
## Romaji, Kana, and Kanji Detection
MojiDetector detector = new MojiDetector();
detector.hasKanji("まっ暗"); // true
detector.hasKanji("まっしろ"); // false
detector.hasKana("ウソ付き"); // true
detector.hasKana("東京"); // false
detector.hasRomaji("モデル XYZ"); // true
detector.hasRomaji("フルーツ"); // false
## Romanization Convention
The romanization system adopted by this library is loosely based on the [modern Hepburn system][1], with adjustments for cases that might cause ambiguity during conversion.
| Cases | Romaji | Kana
|-----------------|-----------|-----------
| Long vowels | sentaa | センター
| | ookayama | おおかやま
| *o+u* vowels | toukyou | とうきょう
| | ousama | おうさま
| Long consonants | kekka | けっか
| | kasetto | カセット
| | kotchi | こっち
| Syllabic *n* | gunma | ぐんま
| | kan'i | かんい
| | shin'you | しんよう
| Particles | he | へ
| | wo | を
| Others | dzu | づ
## Acknowledgement
The romanization tables used in this library are derived from [Mojinizer][2], a Ruby library to convert between Japanese Kana and Romaji. The data is modified to suit differences in algorithm and romanization convention.
## License
© Copyright 2016 Andree Surya
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
[1]: https://en.wikipedia.org/wiki/Hepburn_romanization
[2]: https://github.com/ikayzo/mojinizer
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/FooSoft/yomichan-import
|
2016-07-27T03:02:43Z
|
External dictionary importer for Yomichan.
|
FooSoft / yomichan-import
Public archive
Branches
Tags
Go to file
Code
img
scripts
yomich…
yomich…
.gitignore
.gitmo…
LICEN…
READ…
comm…
daijirin.…
daijise…
epwing…
freque…
gakken…
go.mod
go.sum
jmdict.go
jmdict_…
jmdict_…
jmdict_…
jmdict_…
About
External dictionary importer for
Yomichan.
foosoft.net/projects/yomichan-i…
# translation # japanese # edict # epwing
# yomichan # enamdict
Readme
MIT license
Activity
81 stars
6 watching
23 forks
Report repository
Releases 1
yomichan-import-21.12.…
Latest
on Dec 15, 2021
Packages
No packages published
Contributors
7
Languages
Go 99.2%
Shell 0.8%
This repository has been archived by the owner on Feb 25, 2023. It is now read-only.
Code
Issues
14
Pull requests
Actions
Projects
Security
Insights
jmdict_…
jmdict_…
jmdict_…
jmnedi…
jmnedi…
jmnedi…
kanjidi…
kotowa…
koujien…
meikyo…
rikai.go
shoug…
structu…
wadai.go
Note: this project is no longer maintained. Please see
this post for more information.
Yomichan Import allows users of the Yomichan
extension to import custom dictionary files. It currently
supports the following formats:
JMdict XML
JMnedict XML
KANJIDIC2 XML
Rikai SQLite DB
EPWING:
Daijirin (三省堂 スーパー大辞林)
Daijisen (大辞泉)
Kenkyusha (研究社 新和英大辞典 第5版)
Yomichan Import
README
MIT license
Kotowaza (故事ことわざの辞典)
Meikyou (明鏡国語辞典)
Kojien (広辞苑第六版 • 付属資料)
Gakken (学研国語大辞典 • 古語辞典 • 故事こと
わざ辞典 • 学研漢和大字典)
Yomichan Import is being expanded to support other
EPWING dictionaries based on user demand. This is a
mostly non-technical (although laborious) process that
requires writing regular expressions and creating font
tables; volunteer contributions are welcome.
Follow the steps outlined below to import your custom
dictionary into Yomichan:
1. Download a pre-built binary for Linux, Mac OS X or
Windows from the project page.
2. Launch the yomichan-gtk executable after
extracting the entire archive (or yomichan from the
command line).
3. Specify the source path of the dictionary you wish
to convert.
4. Specify the target path of the dictionary ZIP archive
that you wish to create.
5. Press the button labeled Import dictionary... and
wait for processing to complete.
6. On the Yomichan options page, browse to the
dictionary ZIP archive file you created.
7. Wait for the import progress to complete before
closing the options page.
Installation and Usage
Notice: When converting EPWING dictionaries on
Windows, it is important that the dictionary path you
provide does not contain non-ASCII characters
(including Japanese characters). This problem is due to
the fact that the EPWING library used does not support
|
# Yomichan Import
*Note: this project is no longer maintained. Please see [this
post](https://foosoft.net/posts/sunsetting-the-yomichan-project/) for more information.*
Yomichan Import allows users of the [Yomichan](https://foosoft.net/projects/yomichan) extension to import custom
dictionary files. It currently supports the following formats:
* [JMdict XML](http://www.edrdg.org/jmdict/edict_doc.html)
* [JMnedict XML](http://www.edrdg.org/enamdict/enamdict_doc.html)
* [KANJIDIC2 XML](http://www.edrdg.org/kanjidic/kanjd2index.html)
* [Rikai SQLite DB](https://www.polarcloud.com/getrcx/)
* [EPWING](https://ja.wikipedia.org/wiki/EPWING):
* [Daijirin](https://en.wikipedia.org/wiki/Daijirin) (三省堂 スーパー大辞林)
* [Daijisen](https://en.wikipedia.org/wiki/Daijisen) (大辞泉)
* [Kenkyusha](https://en.wikipedia.org/wiki/Kenky%C5%ABsha%27s_New_Japanese-English_Dictionary) (研究社 新和英大辞典 第5版)
* [Kotowaza](http://www.web-nihongo.com/wn/dictionary/dic_21/d-index.html) (故事ことわざの辞典)
* [Meikyou](https://ja.wikipedia.org/wiki/%E6%98%8E%E9%8F%A1%E5%9B%BD%E8%AA%9E%E8%BE%9E%E5%85%B8) (明鏡国語辞典)
* [Kojien](https://ja.wikipedia.org/wiki/%E5%BA%83%E8%BE%9E%E8%8B%91) (広辞苑第六版 • 付属資料)
* [Gakken](https://ja.wikipedia.org/wiki/%E5%AD%A6%E7%A0%94%E3%83%9B%E3%83%BC%E3%83%AB%E3%83%87%E3%82%A3%E3%83%B3%E3%82%B0%E3%82%B9) (学研国語大辞典 • 古語辞典 • 故事ことわざ辞典 • 学研漢和大字典)
Yomichan Import is being expanded to support other EPWING dictionaries based on user demand. This is a mostly
non-technical (although laborious) process that requires writing regular expressions and creating font tables; volunteer
contributions are welcome.

## Installation and Usage
Follow the steps outlined below to import your custom dictionary into Yomichan:
1. Download a pre-built binary for Linux, Mac OS X or Windows from the [project
page](https://github.com/FooSoft/yomichan-import/releases).
2. Launch the `yomichan-gtk` executable after extracting the entire archive (or `yomichan` from the command line).
3. Specify the source path of the dictionary you wish to convert.
4. Specify the target path of the dictionary ZIP archive that you wish to create.
5. Press the button labeled *Import dictionary...* and wait for processing to complete.
6. On the Yomichan options page, browse to the dictionary ZIP archive file you created.
7. Wait for the import progress to complete before closing the options page.
**Notice**: When converting EPWING dictionaries on Windows, it is important that the dictionary path you provide does
not contain non-ASCII characters (including Japanese characters). This problem is due to the fact that the EPWING
library used does not support such paths. Attempts to convert dictionaries stored in paths containing illegal characters
may cause the conversion process to fail.
|
[
"Multilinguality"
] |
[
"Annotation and Dataset Development",
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/bangumi-data/bangumi-data
|
2016-09-04T15:12:22Z
|
Raw data for Japanese Anime
|
bangumi-data / bangumi-data
Public
5 Branches
300 Tags
Go to file
Code
semantic-release-bot chore(release): 0.…
af6aa34 · last week
.github…
update: updat…
7 months ago
data
update: Auto …
last week
dist
chore(release…
last week
script
Update depe…
3 years ago
.editor…
Add Travis Ci …
8 years ago
.eslintrc
Update depe…
3 years ago
.gitattri…
First commit
8 years ago
.gitignore
update mgtv
6 years ago
.nvmrc
update: add s…
8 months ago
.releas…
update: relea…
7 months ago
.travis.…
implement v0.…
5 years ago
CONT…
清除无用链接…
5 months ago
READ…
fuck GFW
2 years ago
data.d.ts
Update data (…
4 months ago
packag…
update: relea…
7 months ago
packag…
update: updat…
7 months ago
About
Raw data for Japanese Anime
Readme
Activity
Custom properties
493 stars
15 watching
61 forks
Report repository
Releases 43
v0.3.155
Latest
last week
+ 42 releases
Packages
No packages published
Contributors
22
+ 8 contributors
Languages
JavaScript 100.0%
Code
Issues
6
Pull requests
Actions
Projects
Security
Insights
Bangumi Data
README
A collection of data on Japanese anime programs and their broadcast and information sites
The latest v0.3.x data can also be fetched via the CDN https://unpkg.com/bangumi-[email protected]/dist/data.json
https://github.com/bangumi-data/awesome
See CONTRIBUTING.md
The data in this repo is available for use under a CC BY 4.0
license (http://creativecommons.org/licenses/by/4.0/). For
attribution just mention somewhere that the source is
bangumi-data. If you have any questions about using the
data for your project please contact us.
Usage
npm install bangumi-data
const bangumiData = require('bangumi-data');
Which projects use it
Help us improve
License
|
# Bangumi Data [](https://github.com/bangumi-data/awesome)
[](https://travis-ci.org/bangumi-data/bangumi-data)
[](https://www.npmjs.com/package/bangumi-data)
[](https://bundlephobia.com/result?p=bangumi-data)
[](https://www.jsdelivr.com/package/npm/bangumi-data)
[](https://github.com/bangumi-data/bangumi-data#license)
[](https://bangumi-data.slack.com)
[](CONTRIBUTING.md)
A collection of data on Japanese anime programs and their broadcast and information sites
## Usage
```bash
npm install bangumi-data
```
```js
const bangumiData = require('bangumi-data');
```
The latest `v0.3.x` data can also be fetched via the CDN `https://unpkg.com/[email protected]/dist/data.json`
## Which projects use it
https://github.com/bangumi-data/awesome
## Help us improve
See [CONTRIBUTING.md](CONTRIBUTING.md)
## License
The data in this repo is available for use under a CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/). For attribution just mention somewhere that the source is bangumi-data. If you have any questions about using the data for your project please contact us.
|
[
"Structured Data in NLP"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/yoriyuki/nksnd
|
2016-10-07T20:28:38Z
|
New kana-kanji conversion engine
|
yoriyuki / nksnd
Public
Branches
Tags
Go to file
Code
nksnd
tools
.gitignore
LICEN…
READ…
READ…
setup.py
Copyright (C) 2016, 2017: National Institute of Advanced Industrial Science and
Technology (AIST)
Copyright (C) 2016: Yoh Okuno
Partially based on neural_ime by Yoh Okuno licensed under the same license.
The program called nksndconv is installed.
About
New kana-kanji conversion engine
Readme
MIT license
Activity
26 stars
5 watching
0 forks
Report repository
Releases 1
Version 0.1.0 (鴻巣)
Latest
on Dec 5, 2017
Packages
No packages published
Languages
Python 97.4%
Shell 2.6%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
nksnd: Kana-kanji conversion engine
Installation
$ tar -xzvf nksnd-<version>.tar.gz
$ cd nksnd-<version>
$ python setup.py build
$ python setup.py install
Usage
README
MIT license
For one-line-per-sentence inputs
For S-expression API
$ nksndconv
$ nksndconv -m sexp
("best-path" "きょうはいいてんき")
(("今日" "きょう") ("は" "は") ("言" "い") ("い" "い") ("天気" "てんき"))
("list-candidates" (("今日" "きょう") ("は" "は")) "いい" 0)
(("唯々" "いい") ("井伊" "いい"))
|
# nksnd: An open-source kana-kanji conversion engine
Copyright (C) 2016, 2017: National Institute of Advanced Industrial Science and Technology (AIST)
Copyright (C) 2016: Yoh Okuno
The BCCWJ parser is derived from Yoh Okuno's [neural_ime](https://github.com/yohokuno/neural_ime/blob/master/LICENSE) and is licensed under the same license.
Written from scratch in Python.
## Installation
```shell
$ tar -xzvf nksnd-<version>.tar.gz
$ cd nksnd-<version>
$ python setup.py build
$ python setup.py install
```
## Usage
A command called `nksndconv` is installed.
Plain-text input and output:
```shell
$ nksndconv
きょうはいいてんき
今日はいい天気
```
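As a sketch that is not part of the nksnd documentation, the documented `nksndconv` command can also be driven from Python with the standard `subprocess` module (one sentence per line on stdin, as in the example above):

```python
import subprocess

# Hypothetical wrapper around the documented nksndconv CLI (assumes it is on PATH).
result = subprocess.run(
    ['nksndconv'],
    input='きょうはいいてんき\n',  # one sentence per line, as shown above
    capture_output=True,
    text=True,
)
print(result.stdout)  # expected, per the README example: 今日はいい天気
```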
S-expression input and output:
```shell
$ nksndconv -m sexp
("best-path" "きょうはいいてんき")
(("今日" "きょう") ("は" "は") ("言" "い") ("い" "い") ("天気" "てんき"))
("list-candidates" (("今日" "きょう") ("は" "は")) "いい" 0)
(("唯々" "いい") ("井伊" "いい"))
```
|
[
"Language Models",
"Syntactic Text Processing"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/neocl/jamdict
|
2016-10-25T02:47:58Z
|
Python 3 library for manipulating Jim Breen's JMdict, KanjiDic2, JMnedict and kanji-radical mappings
|
neocl / jamdict
Public
Branches
Tags
Go to file
Code
data
docs
jamdict
test
.gitignore
.gitmodules
LICENSE
MANIFEST.in
README.md
TODO.md
_config.yml
jamdict_demo.py
jamdol-flask.py
jmd
logging.json
release.sh
requirements.txt
run
setup.py
test.sh
Jamdict is a Python 3 library for manipulating Jim Breen's JMdict, KanjiDic2, JMnedict and kanji-radical mappings.
Documentation: https://jamdict.readthedocs.io/
Support querying different Japanese language resources
Japanese-English dictionary JMDict
Kanji dictionary KanjiDic2
Kanji-radical and radical-kanji maps KRADFILE/RADKFILE
Japanese Proper Names Dictionary (JMnedict)
About
Python 3 library for manipulating
Jim Breen's JMdict, KanjiDic2,
JMnedict and kanji-radical
mappings
# python # dictionary # japanese
# python-library # kanji # japanese-language
# jmdict # japanese-study # kanjidic2
# japanese-dictionary # jamdict
Readme
MIT license
Activity
Custom properties
129 stars
4 watching
12 forks
Report repository
Releases 8
jamdict 0.1a11
Latest
on May 25, 2021
+ 7 releases
Packages
No packages published
Contributors
3
Languages
Python 99.8%
Shell 0.2%
Code
Issues
6
Pull requests
1
Actions
Projects
Wiki
Security
Insights
Jamdict
Main features
README
MIT license
Fast look up (dictionaries are stored in SQLite databases)
Command-line lookup tool (Example)
Contributors are welcome! 🙇. If you want to help, please see Contributing page.
Jamdict is used in Jamdict-web - a web-based free and open-source Japanese reading assistant software. Please try out the demo instance
online at:
https://jamdict.herokuapp.com/
There also is a demo Jamdict virtual machine online for trying out Jamdict Python code on Repl.it:
https://replit.com/@tuananhle/jamdict-demo
Jamdict & Jamdict database are both available on PyPI and can be installed using pip
To make sure that jamdict is configured properly, try to look up a word using command line
Try Jamdict out
Installation
pip install --upgrade jamdict jamdict-data
Sample jamdict Python code
from jamdict import Jamdict
jam = Jamdict()
# use wildcard matching to find anything starts with 食べ and ends with る
result = jam.lookup('食べ%る')
# print all word entries
for entry in result.entries:
print(entry)
# [id#1358280] たべる (食べる) : 1. to eat ((Ichidan verb|transitive verb)) 2. to live on (e.g. a salary)/to live off/t
# [id#1358300] たべすぎる (食べ過ぎる) : to overeat ((Ichidan verb|transitive verb))
# [id#1852290] たべつける (食べ付ける) : to be used to eating ((Ichidan verb|transitive verb))
# [id#2145280] たべはじめる (食べ始める) : to start eating ((Ichidan verb))
# [id#2449430] たべかける (食べ掛ける) : to start eating ((Ichidan verb))
# [id#2671010] たべなれる (食べ慣れる) : to be used to eating/to become used to eating/to be accustomed to eating/to acq
# [id#2765050] たべられる (食べられる) : 1. to be able to eat ((Ichidan verb|intransitive verb)) 2. to be edible/to be g
# [id#2795790] たべくらべる (食べ比べる) : to taste and compare several dishes (or foods) of the same type ((Ichidan ver
# [id#2807470] たべあわせる (食べ合わせる) : to eat together (various foods) ((Ichidan verb))
# print all related characters
for c in result.chars:
print(repr(c))
# 食:9:eat,food
# 喰:12:eat,drink,receive (a blow),(kokuji)
# 過:12:overdo,exceed,go beyond,error
# 付:5:adhere,attach,refer to,append
# 始:8:commence,begin
# 掛:11:hang,suspend,depend,arrive at,tax,pour
# 慣:14:accustomed,get used to,become experienced
# 比:4:compare,race,ratio,Philippines
# 合:6:fit,suit,join,0.1
Command line tools
python3 -m jamdict lookup 言語学
========================================
Found entries
========================================
Entry: 1264430 | Kj: 言語学 | Kn: げんごがく
--------------------
1. linguistics ((noun (common) (futsuumeishi)))
========================================
Found characters
========================================
Char: 言 | Strokes: 7
--------------------
Readings: yan2, eon, 언, Ngôn, Ngân, ゲン, ゴン, い.う, こと
Meanings: say, word
Char: 語 | Strokes: 14
--------------------
Readings: yu3, yu4, eo, 어, Ngữ, Ngứ, ゴ, かた.る, かた.らう
Meanings: word, speech, language
Char: 学 | Strokes: 8
--------------------
Readings: xue2, hag, 학, Học, ガク, まな.ぶ
Meanings: study, learning, science
No name was found.
Using KRAD/RADK mapping
Jamdict has built-in support for KRAD/RADK (i.e. kanji-radical and radical-kanji mapping). The terminology of radicals/components used by Jamdict can be different from elsewhere.
A radical in Jamdict is a principal component; each character has only one radical.
A character may be decomposed into several writing components.
By default jamdict provides two maps:
jam.krad is a Python dict that maps characters to lists of components.
jam.radk is a Python dict that maps each available component to a list of characters.
# Find all writing components (often called "radicals") of the character 雲
print(jam.krad['雲'])
# ['一', '雨', '二', '厶']
# Find all characters with the component 鼎
chars = jam.radk['鼎']
print(chars)
# {'鼏', '鼒', '鼐', '鼎', '鼑'}
# look up the characters info
result = jam.lookup(''.join(chars))
for c in result.chars:
print(c, c.meanings())
# 鼏 ['cover of tripod cauldron']
# 鼒 ['large tripod cauldron with small']
# 鼐 ['incense tripod']
# 鼎 ['three legged kettle']
# 鼑 []
Finding name entities
# Find all names with 鈴木 inside
result = jam.lookup('%鈴木%')
for name in result.names:
print(name)
# [id#5025685] キューティーすずき (キューティー鈴木) : Kyu-ti- Suzuki (1969.10-) (full name of a particular person)
# [id#5064867] パパイヤすずき (パパイヤ鈴木) : Papaiya Suzuki (full name of a particular person)
# [id#5089076] ラジカルすずき (ラジカル鈴木) : Rajikaru Suzuki (full name of a particular person)
# [id#5259356] きつねざきすずきひなた (狐崎鈴木日向) : Kitsunezakisuzukihinata (place name)
# [id#5379158] こすずき (小鈴木) : Kosuzuki (family or surname)
# [id#5398812] かみすずき (上鈴木) : Kamisuzuki (family or surname)
# [id#5465787] かわすずき (川鈴木) : Kawasuzuki (family or surname)
# [id#5499409] おおすずき (大鈴木) : Oosuzuki (family or surname)
# [id#5711308] すすき (鈴木) : Susuki (family or surname)
# ...
Exact matching
Use exact matching for faster search.
Find the word 花火 by idseq (1194580)
>>> result = jam.lookup('id#1194580')
>>> print(result.names[0])
[id#1194580] はなび (花火) : fireworks ((noun (common) (futsuumeishi)))
Find an exact name 花火 by idseq (5170462)
>>> result = jam.lookup('id#5170462')
>>> print(result.names[0])
[id#5170462] はなび (花火) : Hanabi (female given name or forename)
See jamdict_demo.py and jamdict/tools.py for more information.
Useful links
JMdict: http://edrdg.org/jmdict/edict_doc.html
kanjidic2: https://www.edrdg.org/wiki/index.php/KANJIDIC_Project
JMnedict: https://www.edrdg.org/enamdict/enamdict_doc.html
KRADFILE: http://www.edrdg.org/krad/kradinf.html
Contributors
Le Tuan Anh (Maintainer)
alt-romes
Matteo Fumagalli
Reem Alghamdi
Techno-coder
|
# Jamdict
[Jamdict](https://github.com/neocl/jamdict) is a Python 3 library for manipulating Jim Breen's JMdict, KanjiDic2, JMnedict and kanji-radical mappings.
[](https://jamdict.readthedocs.io/)
**Documentation:** https://jamdict.readthedocs.io/
# Main features
* Support querying different Japanese language resources
- Japanese-English dictionary JMDict
- Kanji dictionary KanjiDic2
- Kanji-radical and radical-kanji maps KRADFILE/RADKFILE
- Japanese Proper Names Dictionary (JMnedict)
* Fast look up (dictionaries are stored in SQLite databases)
* Command-line lookup tool [(Example)](#command-line-tools)
[Contributors](#contributors) are welcome! 🙇. If you want to help, please see [Contributing](https://jamdict.readthedocs.io/en/latest/contributing.html) page.
# Try Jamdict out
Jamdict is used in [Jamdict-web](https://jamdict.herokuapp.com/) - a web-based free and open-source Japanese reading assistant software.
Please try out the demo instance online at:
https://jamdict.herokuapp.com/
There also is a demo [Jamdict virtual machine](https://replit.com/@tuananhle/jamdict-demo) online for trying out Jamdict Python code on Repl.it:
https://replit.com/@tuananhle/jamdict-demo
# Installation
Jamdict & Jamdict database are both available on [PyPI](https://pypi.org/project/jamdict/) and can be installed using pip
```bash
pip install --upgrade jamdict jamdict-data
```
# Sample jamdict Python code
```python
from jamdict import Jamdict
jam = Jamdict()
# use wildcard matching to find anything that starts with 食べ and ends with る
result = jam.lookup('食べ%る')
# print all word entries
for entry in result.entries:
print(entry)
# [id#1358280] たべる (食べる) : 1. to eat ((Ichidan verb|transitive verb)) 2. to live on (e.g. a salary)/to live off/to subsist on
# [id#1358300] たべすぎる (食べ過ぎる) : to overeat ((Ichidan verb|transitive verb))
# [id#1852290] たべつける (食べ付ける) : to be used to eating ((Ichidan verb|transitive verb))
# [id#2145280] たべはじめる (食べ始める) : to start eating ((Ichidan verb))
# [id#2449430] たべかける (食べ掛ける) : to start eating ((Ichidan verb))
# [id#2671010] たべなれる (食べ慣れる) : to be used to eating/to become used to eating/to be accustomed to eating/to acquire a taste for ((Ichidan verb))
# [id#2765050] たべられる (食べられる) : 1. to be able to eat ((Ichidan verb|intransitive verb)) 2. to be edible/to be good to eat ((pre-noun adjectival (rentaishi)))
# [id#2795790] たべくらべる (食べ比べる) : to taste and compare several dishes (or foods) of the same type ((Ichidan verb|transitive verb))
# [id#2807470] たべあわせる (食べ合わせる) : to eat together (various foods) ((Ichidan verb))
# print all related characters
for c in result.chars:
print(repr(c))
# 食:9:eat,food
# 喰:12:eat,drink,receive (a blow),(kokuji)
# 過:12:overdo,exceed,go beyond,error
# 付:5:adhere,attach,refer to,append
# 始:8:commence,begin
# 掛:11:hang,suspend,depend,arrive at,tax,pour
# 慣:14:accustomed,get used to,become experienced
# 比:4:compare,race,ratio,Philippines
# 合:6:fit,suit,join,0.1
```
## Command line tools
To make sure that jamdict is configured properly, try looking up a word using the command line:
```bash
python3 -m jamdict lookup 言語学
========================================
Found entries
========================================
Entry: 1264430 | Kj: 言語学 | Kn: げんごがく
--------------------
1. linguistics ((noun (common) (futsuumeishi)))
========================================
Found characters
========================================
Char: 言 | Strokes: 7
--------------------
Readings: yan2, eon, 언, Ngôn, Ngân, ゲン, ゴン, い.う, こと
Meanings: say, word
Char: 語 | Strokes: 14
--------------------
Readings: yu3, yu4, eo, 어, Ngữ, Ngứ, ゴ, かた.る, かた.らう
Meanings: word, speech, language
Char: 学 | Strokes: 8
--------------------
Readings: xue2, hag, 학, Học, ガク, まな.ぶ
Meanings: study, learning, science
No name was found.
```
## Using KRAD/RADK mapping
Jamdict has built-in support for KRAD/RADK (i.e. kanji-radical and radical-kanji mapping).
The terminology of radicals/components used by Jamdict can be different from elsewhere.
- A radical in Jamdict is a principal component; each character has only one radical.
- A character may be decomposed into several writing components.
By default jamdict provides two maps:
- jam.krad is a Python dict that maps characters to lists of components.
- jam.radk is a Python dict that maps each available component to a list of characters.
```python
# Find all writing components (often called "radicals") of the character 雲
print(jam.krad['雲'])
# ['一', '雨', '二', '厶']
# Find all characters with the component 鼎
chars = jam.radk['鼎']
print(chars)
# {'鼏', '鼒', '鼐', '鼎', '鼑'}
# look up the characters info
result = jam.lookup(''.join(chars))
for c in result.chars:
print(c, c.meanings())
# 鼏 ['cover of tripod cauldron']
# 鼒 ['large tripod cauldron with small']
# 鼐 ['incense tripod']
# 鼎 ['three legged kettle']
# 鼑 []
```
## Finding named entities
```python
# Find all names with 鈴木 inside
result = jam.lookup('%鈴木%')
for name in result.names:
print(name)
# [id#5025685] キューティーすずき (キューティー鈴木) : Kyu-ti- Suzuki (1969.10-) (full name of a particular person)
# [id#5064867] パパイヤすずき (パパイヤ鈴木) : Papaiya Suzuki (full name of a particular person)
# [id#5089076] ラジカルすずき (ラジカル鈴木) : Rajikaru Suzuki (full name of a particular person)
# [id#5259356] きつねざきすずきひなた (狐崎鈴木日向) : Kitsunezakisuzukihinata (place name)
# [id#5379158] こすずき (小鈴木) : Kosuzuki (family or surname)
# [id#5398812] かみすずき (上鈴木) : Kamisuzuki (family or surname)
# [id#5465787] かわすずき (川鈴木) : Kawasuzuki (family or surname)
# [id#5499409] おおすずき (大鈴木) : Oosuzuki (family or surname)
# [id#5711308] すすき (鈴木) : Susuki (family or surname)
# ...
```
## Exact matching
Use exact matching for faster search.
Find the word 花火 by idseq (1194580)
```python
>>> result = jam.lookup('id#1194580')
>>> print(result.names[0])
[id#1194580] はなび (花火) : fireworks ((noun (common) (futsuumeishi)))
```
Find an exact name 花火 by idseq (5170462)
```python
>>> result = jam.lookup('id#5170462')
>>> print(result.names[0])
[id#5170462] はなび (花火) : Hanabi (female given name or forename)
```
See `jamdict_demo.py` and `jamdict/tools.py` for more information.
# Useful links
* JMdict: [http://edrdg.org/jmdict/edict_doc.html](http://edrdg.org/jmdict/edict_doc.html)
* kanjidic2: [https://www.edrdg.org/wiki/index.php/KANJIDIC_Project](https://www.edrdg.org/wiki/index.php/KANJIDIC_Project)
* JMnedict: [https://www.edrdg.org/enamdict/enamdict_doc.html](https://www.edrdg.org/enamdict/enamdict_doc.html)
* KRADFILE: [http://www.edrdg.org/krad/kradinf.html](http://www.edrdg.org/krad/kradinf.html)
# Contributors
- [Le Tuan Anh](https://github.com/letuananh) (Maintainer)
- [alt-romes](https://github.com/alt-romes)
- [Matteo Fumagalli](https://github.com/matteofumagalli1275)
- [Reem Alghamdi](https://github.com/reem-codes)
- [Techno-coder](https://github.com/Techno-coder)
|
[
"Multilinguality"
] |
[
"Annotation and Dataset Development",
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/odashi/small_parallel_enja
|
2016-10-27T03:14:36Z
|
50k English-Japanese Parallel Corpus for Machine Translation Benchmark.
|
odashi / small_parallel_enja
Public
Branches
Tags
Go to file
Go to file
Code
READ…
dev.en
dev.ja
test.en
test.ja
train.en
train.e…
train.e…
train.e…
train.e…
train.e…
train.e…
train.e…
train.ja
train.ja…
train.ja…
train.ja…
train.ja…
train.ja…
train.ja…
train.ja…
About
50k English-Japanese Parallel
Corpus for Machine Translation
Benchmark.
Readme
Activity
92 stars
2 watching
14 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Roff 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
This directory includes a small parallel corpus for English-
Japanese translation task. These data are extracted from
TANAKA Corpus by filtering sentence length 4 to 16 words.
English sentences are tokenized using Stanford Tokenizer
and lowercased. Japanese sentences are tokenized using
KyTea.
All texts are encoded in UTF-8. Sentence separator is '\n'
and word separator is ' ' .
Attention: some English words have different tokenization
results from Stanford Tokenizer, e.g., "don't" -> "don" "'t",
which may have come from preprocessing errors. Please take care
when using this dataset for token-level evaluation.
File
#sentences
#words
#vocabulary
train.en
50,000
391,047
6,634
- train.en.000
10,000
78,049
3,447
- train.en.001
10,000
78,223
3,418
- train.en.002
10,000
78,427
3,430
- train.en.003
10,000
78,118
3,402
- train.en.004
10,000
78,230
3,405
train.ja
50,000
565,618
8,774
- train.ja.000
10,000
113,209
4,181
- train.ja.001
10,000
112,852
4,102
- train.ja.002
10,000
113,044
4,105
- train.ja.003
10,000
113,346
4,183
- train.ja.004
10,000
113,167
4,174
small_parallel_enja: 50k
En/Ja Parallel Corpus for
Testing SMT Methods
Corpus Statistics
README
File
#sentences
#words
#vocabulary
dev.en
500
3,931
816
dev.ja
500
5,668
894
test.en
500
3,998
839
test.ja
500
5,635
884
|
small_parallel_enja: 50k En/Ja Parallel Corpus for Testing SMT Methods
======================================================================
This directory includes a small parallel corpus for the English-Japanese
translation task. These data are extracted from the
[TANAKA Corpus](http://www.edrdg.org/wiki/index.php/Tanaka_Corpus)
by filtering for sentences of 4 to 16 words in length.
English sentences are tokenized using
[Stanford Tokenizer](http://nlp.stanford.edu/software/tokenizer.html)
and lowercased.
Japanese sentences are tokenized using [KyTea](http://www.phontron.com/kytea/).
All texts are encoded in UTF-8. Sentence separator is `'\n'` and word separator
is `' '`.
**Attention**: some English words have different tokenization results from Stanford Tokenizer,
e.g., "don't" -> "don" "'t", which may have come from preprocessing errors.
Please take care when using this dataset for token-level evaluation.
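Because each file is plain UTF-8 text with one sentence per line and single-space word separators, the corpus can be loaded with a few lines of Python. The sketch below is not part of this repository; it simply assumes `train.en` and `train.ja` have been downloaded into the working directory.

```python
# Minimal loading sketch (not part of this repository).
# Assumes train.en and train.ja are in the current directory.
def load_parallel(en_path="train.en", ja_path="train.ja"):
    with open(en_path, encoding="utf-8") as f_en, open(ja_path, encoding="utf-8") as f_ja:
        for en_line, ja_line in zip(f_en, f_ja):
            # one sentence per line; tokens are separated by single spaces
            yield en_line.rstrip("\n").split(" "), ja_line.rstrip("\n").split(" ")

pairs = list(load_parallel())
print(len(pairs))   # expected: 50,000 sentence pairs
print(pairs[0])     # first tokenized (English, Japanese) pair
```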
Corpus Statistics
-----------------
| File | #sentences | #words | #vocabulary |
|:---------------|-----------:|--------:|------------:|
| train.en | 50,000 | 391,047 | 6,634 |
| - train.en.000 | 10,000 | 78,049 | 3,447 |
| - train.en.001 | 10,000 | 78,223 | 3,418 |
| - train.en.002 | 10,000 | 78,427 | 3,430 |
| - train.en.003 | 10,000 | 78,118 | 3,402 |
| - train.en.004 | 10,000 | 78,230 | 3,405 |
| train.ja | 50,000 | 565,618 | 8,774 |
| - train.ja.000 | 10,000 | 113,209 | 4,181 |
| - train.ja.001 | 10,000 | 112,852 | 4,102 |
| - train.ja.002 | 10,000 | 113,044 | 4,105 |
| - train.ja.003 | 10,000 | 113,346 | 4,183 |
| - train.ja.004 | 10,000 | 113,167 | 4,174 |
| dev.en | 500 | 3,931 | 816 |
| dev.ja | 500 | 5,668 | 894 |
| test.en | 500 | 3,998 | 839 |
| test.ja | 500 | 5,635 | 884 |
|
[
"Machine Translation",
"Multilinguality",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/musyoku/python-npylm
|
2016-12-18T11:58:15Z
|
ベイズ階層言語モデルによる教師なし形態素解析
|
musyoku / python-npylm
Public
Branches
Tags
Go to file
Go to file
Code
run
src
test
.gitignore
READ…
makefile
ベイズ階層言語モデルによる教師なし形態素解析の
C++実装です。
単語n-gramモデルは3-gramで固定です。2-gramは非対
応です。
現在も開発途中です。
実装について
ベイズ階層言語モデルによる教師なし形態素解析
Forward filtering-Backward samplingによる単語分
割でアンダーフローを防ぐ
文字列の単語ID化をハッシュで実装しているため、学習
結果を違うコンピュータで用いると正しく分割が行えな
い可能性があります。
About
ベイズ階層言語モデルによる教師な
し形態素解析
# npylm
Readme
Activity
33 stars
2 watching
6 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
C++ 94.1%
Python 5.2%
Makefile 0.7%
Code
Issues
4
Pull requests
Actions
Projects
Security
Insights
Nested Pitman-Yor
Language Model (NPYLM)
README
前向き確率の計算をlogsumexpからスケーリングに
変更
Boost
C++14
Python 3
macOSの場合、PythonとBoostはともにbrewでインス
トールする必要があります。
PYTHONPATH を変更する必要があるかもしれません。
Pythonのバージョンを自身のものと置き換えてくださ
い。
更新履歴
2017/11/21
動作環境
準備
macOS
Python 3のインストール
brew install python3
Boostのインストール
brew install boost-python --with-
python3
Ubuntu
Boostのインストール
./bootstrap.sh --with-python=python3 --
with-python-version=3.5
./b2 python=3.5 -d2 -j4 --prefix
YOUR_BOOST_DIR install
ビルド
以下のコマンドでnpylm.so が生成され、Pythonから利
用できるようになります。
makefile 内のBoostのパスを自身の環境に合わせて書
き換えてください。
Ubuntuでエラーが出る場合は代わりに以下を実行しま
す。
半教師あり学習をする場合は必要です。
run/unsupervised にコード例があります。
-file
学習に使うテキストファイル
-dir
学習に使うテキストファイル群が入っているデ
ィレクトリ
複数ファイルを用いる場合はこちらを指定
-split
読み込んだ行のうち何割を学習に用いるか
0から1の実数を指定
1を指定すると全データを用いてモデルを学習
する
-l
可能な単語の最大長
make install
make install_ubuntu
MeCabのインストール
pip install mecab-python3
学習(教師なし)
実行例
python3 train.py -split 0.9 -l 8 -file
YOUR_TEXTFILE
オプション
日本語なら8〜16、英語なら16〜20程度を指定
文の長さをN、単語の最大長をLとすると、
NPYLMの計算量はO(NL^3)になる
なお学習の再開はできません。
run/semi-supervised にコード例があります。
-file
学習に使うテキストファイル
-dir
学習に使うテキストファイル群が入っているデ
ィレクトリ
複数ファイルを用いる場合はこちらを指定
-train-split
読み込んだ行のうち何割を学習に用いるか
0から1の実数を指定
1を指定すると全データを用いてモデルを学習
する
-ssl-split
学習データのうち何割を教師データに用いるか
0から1の実数を指定
-l
可能な単語の最大長
日本語なら8〜16、英語なら16〜20程度を指定
文の長さをN、単語の最大長をLとすると、
NPYLMの計算量はO(NL^3)になる
なお学習の再開はできません。
分割結果をファイルに保存します。
学習(半教師あり)
実行例
python3 train.py -train-split 0.9 -ssl-
split 0.01 -l 8 -file YOUR_TEXTFILE
オプション
単語分割
-file
分割するテキストファイル
-dir
分割するテキストファイルが入っているディレ
クトリ
複数ファイルをまとめて分割する場合はこれを
指定
ファイルごとに個別の出力ファイルが作成され
ます
-out
出力フォルダ
研究以外の用途には使用できません。
https://twitter.com/daiti_m/status/851810748263157760
実装に誤りが含まれる可能性があります。
質問等、何かありましたらissueにてお知らせくださ
い。
現在、条件付確率場とベイズ階層言語モデルの統合によ
る半教師あり形態素解析と半教師あり形態素解析
NPYCRFの修正を実装しています。
アンダーフローが起きていないかを確認するためにわざ
と1行あたりの文字数を多くして学習させています。
15万行あったデータから改行を削除し4,770行に圧縮し
ました。
python viterbi.py -file YOUR_TEXTFILE
オプション
注意事項
展望
実行結果
けものフレンズ実況スレ
/ アニメ / に / セルリアン / 設定 / を / 出 / す / 必要
/ は / まったく / なかった / と思う / um / nk / は / 原
作 / と / 漫画 / で / 終わり方 / 変え / て / た / なあ /
原作 / 有り / だ / と / 色々 / 考え / ちゃ / う / けど /
オリジナル / だ / と / 余計な / 事 / 考え / なくてい
い / ね / 犬 / のフレンズ / だから / さ / 、 / ガキが
余計な / レスつけてくんな / って / 言って / る / だ
ろ / ジャパリパーク / 滅亡 / まで / あと / 1 / 4 / 日 /
間 / 。 / まだ / 作品 / は / 完結 / して / ない / し / こ
の / あと / 話 / に / どう / 関わって / くる / か / わか
ら / ない / と思う / んだが / 最終話 / まで / い / って
/ / そういう / こと / だった / のか / ー / ! / / って / な
/ った / 後で / 見返すと / また / 唸 / ら / され / そう
/ な / 場面 / が / 多い / ん / だよ / ね / お別れ / エン
ド / 想像 / する / だけで / 震え / て / くる / けど / ハ
ッピー / エンド / なら / 受け入れ / る / しかない /
よ / ね / この / スレ / は / 子供 / が / 多くて / 最終回
の / 感動 / も / 萎える / だ / ろ / う / 百合 / アニメ /
として / 楽しんで / る / 奴 / もいる / ん / だね / ぇ /
あー / 、 / それ / は / 誤 / り / だ / 。 / 低 / レアリテ
ィ / フレンズ / にスポットライト / が / 当て / られ /
て / る / こと / もある / 。 / 毒 / 抜いた / っていう /
のは / 伏線 / 回収 / は / だいたい / や / る / けど / も
/ 鬱 / な / 感じ / に / は / して / ない / ていう / 意味 /
だ / と / 解釈 / し / た / けど / 【 / ᓼ / 】 / 『けもの
フレンズ』 / 第 / 10話「ろっじ」 / より / 、 / 先行
/ 場面 / カット / が / 到着 / ! / 宿泊 / した / ロッジ /
に / 幽霊 / ……? / 【 / ᓼ / 】 / [無断転載禁止]©2ch.
/ net [ / 8 / 9 / 1 / 1 / 9 / 1 / 9 / 2 / 3 / ] / 実際のとこ
ろ / けもの / フレンズ / は / 百合 / なのか / ? / 例え
ば / どこが / 百合 / っぽい / ? / いや / 、 / ある / だ
ろ / 。 / セルリアン / が / いる / 事 / に / よって / 物
語 / に / 緊張感 / が / 生まれ / る / 。 / 伏線 / が / 結
構 / 大きい / 物 / な気がする / んだ / けど / これ /
あと / 2話 / で / 終わ / る / のか / なぁ / ? / もしかし
て / 「 / ジャパリ / パーク / の / 外に / 人間 / を / 探
しに / 行く / よ / ! / カバンちゃん / たち / の / 冒険
は / これから / だ!」 / エンド / じゃある / まい / な
/ それ / でも / あれ / は / 許容 / し / 難 / かった / と
おもう / ぞ / そもそも / 利 / 潤 / 第一 / でない / け
もの / プロジェクト / に / 売上 / で / 優劣 / 語 / る /
の / は / ナンセンス / や / ぞ / の / タイトル / も / い
い / な / カバンちゃん / 「 / さーばる / 島の外に /
出 / ちゃ / う / と / 記憶 / も / なくな / る / ん / だ /
よ / ? / 」 / さーばる / 「 / うん / 、 / それ / でも /
いい / よ / 。 / 」 / カバンちゃん / 「 / う / うん / 、
/ ダメ / 。 / ボク / が / のこ / る / よ / 。 / 」 / さー
ばる / 「 / え? / ・・・ / 」 / カバンちゃん / 「 / さ
ーばる / は / 大切な / 僕 / のフレンズ / だから / 」 /
わざわざ / ng / 宣言 / とか / キッズ / の / 代表 / み
たいな / も / ん / だろ / 出来 / の / 良い / おさ / らい
/ サイト / が / ある / よ / 俺 / も / そこ / で / 予習 /
し / た / タイム / アタック / 式 / の / クエスト / で /
低 / レア / 構成 / ボーナス / 入るから / 人権 / ない /
ってレベル / じゃ / ない / ぞ /
/ 予約 / でき / ない / のに / いつまで / 2巻 / は / ラ
ンキング / 1位 / なん / だ / よ / けもの / フレンズ /
、 / 縮めて / フレンズ / 。 / ジャパリパーク / の /
不思議 / な / 不思議な / 生き物 / 。 / 空 / に / 山 / に
/ 海 / に / フレンズ / は / いた / る / ところ / で / そ /
の / 姿 / を / 見 / る / こと / が / 出来 / る / 。この /
少女 / 、 / ヒト / と / 呼ばれ / る / かばん。 / 相棒 /
の / サーバル / と / 共に / バトル / & / ゲット / 。 /
フレンズの / 数だけ / の / 出会い / が / あり / フレ
ンズの / 数だけ / の / 別れ / がある / ( / 石塚運昇) /
ニコ動 / 調子 / 悪 / そう / なんだ / けど / 上映会 /
だ / いじょうぶ / か / ね / コスプレ / はよ / もう /
何度も / 書かれてる / だろうけど / 、 / 2話 / ed / で
/ 入 / って / きた / 身としては / ed / やっぱ / 良い /
。 / 歌 / も / 歌詞 / も / 合って / る / し / 、 / 普通の
/ アニメ / っぽく / ない / 廃墟 / に / した / の / も /
全部 / 良い / 。 / 情報 / が / 氾濫 / し / すぎ / て / 何
/ が / ネタバレ / か / さっぱり / 分からん / けど / 、
/ 1つ / ぐらい / は / 本物 / が / まじ / っ / て / そう /
だな / 。 / ま、 / 来週 / を / 楽しみ / に / して / る /
よ / 。 / アライさん / の / 「 / 困難は / 群れで分け
合え / 」 / って / 台詞 / もしかして / アプリ版 / で /
出て / きた / こと / あった / りする / ? / それ / なら
/ 記憶 / の / 引 / 継ぎ / 説 / は / ほぼ / 間違え / なさ /
そう / だ / けど / 神 / 展開 / で / ワロタ / これ / は /
地上波 / 流 / せ / ません / ね / ぇ / … / まあ、 / 数 /
打ちゃ当たる / 11話の / 展開 / は / 予想 / されて /
なかった / 気配 / だ / が / 汗 / まあ / ニコ動 / ラン
キング / に / あ / っ / た / アプリ版 / ストーリー /
見 / た / 時 / 点 / で / すでに / 覚悟はできてる / 一 /
人 / でも / いいから / 出 / して / ほしかった / … /
マジで / サーバル / / プレーリードッグ / / ヘラジカ
/ 殷周 / 伝説 / みたい / な / 肉 / マン / を / 想像 / し /
ちゃ / っ / た / じゃぱりまん / の / 中身 / で / 肉 / が
/ ある / なら / だ / が / 無い / から / 安心 /
/ 知識 / や / 性格 / まで / クローン / は / 無理 / だ /
と思う / nhkで / アニメ / 放送 / から / の / 紅白 / 出
場 / だと / 嫌な予感がする / 藤子不二 / 雄 / や / 大
友 / 克洋や / 鳥山明 / は / エール / を / 送 / られ / た
/ が、 / 自分 / が / 理解 / でき / ない / もの / が / 流
行 / っ / て / る / と / 怒 / った / らしい / な / 巨人の
星 / とか / ス / ポ / 根 / も / のは / 、 / どうして / こ
んな / もの / が / 人気 / なん / だ / って / アシスタン
ト / と / 担当 / に / 怒鳴り / 散らし / た / そう / な /
日本語 / で / しか / 伝わらない / 表現 / ある / もん /
ね / ひらがな / カタカナ / 漢字 / でも / ニュアンス /
使い / 分 / け / られ / る / し / また / 同時 / に / 英語
/ いい / なあ / と思う / 所 / もある / 親友 / ( / ?) / な
のに / そ / の / 欠片 / も / み / られ / ない / 猫 / の /
話 / は / やめ / なさい / … / なん / か / 、 / マズルの
/ ところ / が / カイ / ゼル / 髭 / みたい / で / 、 / か
わいい / っていうより / カッコイイ / とおもう / ん
/ だが / / 「 / ( / いや / 俺ら / に / 言われ / て / も / ) /
」 / って / 困惑 / する / 様子 / が / w / 藤子不二 / 雄
/ は / 貶 / そう / と / 思って / た / ら / 全力で / 尊敬 /
されて / 持ち / 上げ / て / くる / ので / 面倒見 / ざ
るを得な / かった / とか / いう / ホント / か / ウソ /
か / 分からない / 逸話 / すこ / 世界 / 最高峰 / の /
日本 / アニメ / を / 字幕 / 無し / で / 観 / られ / る /
幸せ / たぶん / そうな / ん / だろう / とは思う / が /
おいしい / とこ / だけ / 取ら / れ / て / 何も / やって
ない / 扱い / で / うん / ざ / り / して / そう / 結局 /
自分 / の / 好 / み / って事 / か / 本人 / に / しか / わ
から / ない / 先 / 駆 / 者 / として / の / 強烈な / 自負
/ と / 、 / 追い / 抜か / れ / る / 恐怖 / かわ / あった /
のだろう / と愚考 / した / 。 / スポーツ / 漫画や /
劇 / 画 / が / 流行 / っ / た / 時 / ノイローゼ / になり
/ かけ / た / と / 聞く / が / 、 / 90年代 / 以降 / の /
トーン / バリバリ / アニメ / 絵柄 / や / 萌え / 文化 /
とか / 見 / た / ら / どう / なってしまう / んだろう /
あと / サーバルちゃん / かわいい / コクボス / 「 /
ジカ / ン / ダヨ / 」 / 礼儀 / は / 守 / る / 人 / だろう
/ … / 内心 / 穏やか / ではない / が / ニコ動 / の再生
数 / なんて / まったく / 当て / に / ならん / ぞ / ニコ
生 / の / 来場者 / 数 / と / 有料 / 動画 / の / 再生数 /
は / 当て / に / して / いい / けど / 6話 / の / へいげ
ん / の / とき / / ヘラジカ / さん / たち / に / 「 / サ
ーバルキャット / のサーバル / だよ / 」って / 自己
紹介 / して / た / のが / なんか / 不思議 / だ / った /
な / 「 / 省略 / する / 」って / 文化 / ある / ん / だ / /
みたい / な / 一話 / の再生数 / で / 分かる / のは /
洗脳 / 度 / だけ / だ / な / すぐ / 解 / け / る / 洗脳 /
かも知れん / が / それ / 見たい / わ / めちゃくちゃ /
好循環 / じゃない / か / … / イッカク / クジラ / の /
イッカク / だ。 / って / 名乗 / って / る / キャラ / も
いた / し / 。 / いや~ / メンゴメンゴ / / あ / 、 / ボ
ス / さん / きゅー / w / アプリ版 / の / オオ / アルマ
ジロ / の / 声で / 絡んで / ほしかった / ( / cv / 相 /
沢 / 舞 / ) / 名前 / の / 概念 / 例えば / ヘラジカ / は /
種族 / として / 言って / る / のか / 名前 / と / して /
言って / る / のか / かばんちゃん / と / 名 / 付け / て
/ る / から / 概念 / として / は / 在 / る / ん / だろう /
けど / 果たして / クローン / は / 同じ / ポテンシャ
ル / を / 引き / 出 / せ / る / だろう / か / 。 / 藤子f /
という / か / ドラえもん / に / は / 完全 / 敗北 / を /
認め / て / た / らしい / から / ね / ドラえもん / を /
超え / る / キャラ / は / 作れ / ない / って / どうぶつ
/ スクープ / と / ダーウィン / と / wbc / どっち / を
優先 / すべき / か /
/ 捨て / た / り / しない / ん / じゃないかな / ? /
ま、 / ちょっと / (ry / ロッ / ソファンタズマ / は /
先代 / サーバル / の / 技 / って / 言いたい / のは / わ
かる / けど / 。 / かばんちゃん / 視点 / から / 見る
と / op / の / 映像 / の意味 / になる / と / いう / ダブ
ルミーニング / なの / かも知れない / これ / は / な
かなか / 面白い / 基本的に / ゲスト / フレンズって
/ カップル / にな / る / けど / この / 二人 / は / その
後 / どうなった / ん / だろう / ね / 杏子 / も / 野中 /
か / 優しく / て / 元気な / 女の子 / の / 唐突な / 涙 /
は / めちゃくちゃ / くる / もの / がある / な / ppp /
が / 歌 / って / る / こと / から / 考えると / あり / 得
/ る / ね / 、 / 宣伝 / 曲 / そして / まんま / と / よう
こそ / され / ました / なんだか / コケ / そう / な /
気がして / ならない / 12話 / だから / 話 / を / 膨ら
ま / せ / て / 伏線 / を / 回収 / して / って / の / が /
でき / た / けど / だら / っと / 続け / て / いく / と /
すると / ・・・ / たまに / 見 / る / 豆腐 / や / 冷奴 /
は / 何 / を / 示唆 / して / る / ん / だ / ? / 姿 / が / フ
レンズ / だと / 判別 / でき / ない / ん / じゃないか /
とりあえず / 初版 / は / 無理 / でも / 重版分 / 買お
う / 古典sf / 的に / タイ / ムスリップ / して / アプリ
版 / プレイヤー / として / ミライさん / の / 運命 /
を / 変え / に / 行く / とか / は / あり / そう /
/ すごーい / ぞ / おー / ! / が / 正解 / で / 世界遺 / 産!
/ が / 空耳 / おいおい / エヴァ / か / ? / 聖域 / への /
立ち / 入り / 認可 / 、 / 正体 / 不明 / な / 新規 / フレ
ンズ / の / 評価 / 、 / 困った / とき / の / 相談 / 役 /
色々 / やって / る / な / 輪廻転生 / して / 二人 / は /
一緒 / も / 人間 / は / 滅んで / て / かばんちゃん / は
/ サーバルちゃん / とずっと一緒 / も / ぶっちゃけ /
百合厨の願望 / だし / たつきが / そんな安直な / 設
定 / に / せ / ん / でしょ / ジャパリパーク / って / 時
間 / の / 進 / み / が / サンドスター / その他 / の / 影
響 / で / 物凄く / 早 / く / なって / る / から / 人工 /
物 / が / 朽ち / て / た / り / する / ん / か / な / 聞こ
え / ない / と / 言えば / ツチノコ / の / 「 / ピット
器官 / ! / 」 / って / 言 / っ / た / 後 / に / モ / ニャモ
ニャ / … / って / なんか / 言って / る / けど / なんて
/ 言って / る / のか / 未だに / 分からない / … / メ /
イ / ズ / ランナー / 的な / 人類滅亡 / とか / ちょっ
と / 似てる / し / 13話 / ネタバレ / 中尉 / か / つて /
の / ロッジ / で / は / フレンズと / 一 / 夜 / を / 共に
/ する / こと / が / でき / る / 人工 / 物 / は / ヒト /
が / 使わ / なくな / る / と / たった / 数年 / で / 朽ち
/ る / よ / 林 / 業 / 管理 / も / やって / る / から / な /
toki / o / のフレンズ / と / 言われ / て / も / 不思議 /
はない / 図書館 / に / 入 / り / 浸 / って / ゴロゴロ /
絵本 / 読んで / る / フレンズ / い / ない / かな / ピッ
ト器官 / ! / ・・・ / ・・・ / だとかでぇ、 / 俺には
/ 赤外線が見えるからな / ! / ( / ・ / ∀・) / 目 / が /
か / ゆい / … / ジャパリパーク / に / も / 花粉症 / は
/ ある / のか / な / サーバル / が / なぜ / ハシビロコ
ウ / だけ / ハシビロ / ちゃん / 呼び / なのか / が /
最大の / 謎 / 12話 / は / op / は / 一番 / 最後 / カバ
ンちゃん / と / サーバルちゃんが / 仲良く / ブラン
コ / に / 揺ら / れ / る / 絵 / が / 入 / る / よ / けもフ
レ / を / 細かく / 見 / て / ない / 間違って / セリフ /
覚え / て / る / アプリ / 時代 / の / 知識 / を / 間違っ
て / 捉え / て / る / って / いう / ので / 、 / 悶々と /
考察 / してる / 人 / の / 多 / い / こと / 多い / こと /
… / … /
2chから集めた884,158行の書き込みで学習を行いまし
た。
/ どことなく / 硬貨 / の / 気配 / が / ある / な /
2ch
/ 展開 / の / ヒント / と / 世界観 / を / 膨らま / せ /
る / ギミック / と / 物語 / の / 伏線 / を / 本当に / 勘
違い / して / る / 人 / は / い / ない / でしょ /
/ トラブっても / その / 前の / 状態 / に / 簡単 / に /
戻 / れ / る /
/ レシート / を / わ / た / さ / ない / 会社 / は / 100%
/ 脱税 / している /
/ すっきり / した / 。 / 実装 / 当時 / は / 2度と / や
りたく / 無い / と思った / けど /
/ 未だ / 趣味 / な / 個人 / 用途 / で / win / 10 / に / 頑
なに / 乗り換え / ない / ヤツ / なんて / 新しい / も
ん / に / 適応 / でき / ない / 老 / 化 / 始ま / っ / ちゃ
/ って / る / お / 人 / か /
/ 実家の / 猫 / がよくやる / けど / あんまり / 懐 / か
れ / て / る / 気がしない /
/ ラデ / の / ラインナップ / は / こう / いう / 噂 / の
ようだ。 /
/ ダメウォ / なんて / 殆ど / で / ねー / じゃねーか /
ど / アホ /
/ 新 / retina / 、 / 旧 / retina / が / 併売 / され / て /
る / 中 / で / 比較 / やら / 機種 / 選び / ごと / に / 別
/ スレ / 面倒 / だ / もん /
/ イオク / 出 / る / だけで / 不快 /
/ あの / まま / やってりゃ / ジュリア / の / 撃墜 / も
/ 時間の問題 / だ / っ / た / し /
/ も / し / 踊ら / され / て / た / ら / 面白 / さ / を /
感じ / られ / る / はず / だ /
/ 二連 / スレ建て / オ / ッ / ツ / オ / ッ / ツ /
/ の / ガチャ限 / 定運極化特別ルール / って / 何 / で
すか / ? /
/ 特に / その / 辺 / フォロー / ない / まま / あの / 状
況 / で / と / どめ / 刺 / し / 損 / ね / ました / で / 最
後 / まで / いく / の / は / な / ・・・ /
/ こうなると / 意外 / に / ツチノコ / が / ハードル /
低 / そう /
/ 強制 / アップデート / のたびに / 自分 / の / 使い方
/ にあわせた / 細かい / 設定 / を / 勝手に / 戻 / す /
だけ / で / なく /
/ マジか了 / 解した /
/ 今度 / は / mac / 使い / () / に / 乗り換え / た / ん /
だろう / が / ・・・ / 哀れ / よ / のぅ / / 今 / 後 / も /
ノエル / たくさん / 配 / って / くれ / る / なら / 問題
ない / けど /
/ マルチ / 魔窟 / 初めて / やった / けど / フレンド /
が / いい人 / で / 上手く / 出来 / た / わ /
/ 咲/ くん/ も/ 女/ の/ 子/ 声優/ が/ よか
た/
|
# Nested Pitman-Yor Language Model (NPYLM)
[ベイズ階層言語モデルによる教師なし形態素解析](http://chasen.org/~daiti-m/paper/nl190segment.pdf)のC++実装です。
単語n-gramモデルは3-gramで固定です。2-gramは非対応です。
現在も開発途中です。
実装について
- [ベイズ階層言語モデルによる教師なし形態素解析](http://musyoku.github.io/2016/12/14/%E3%83%99%E3%82%A4%E3%82%BA%E9%9A%8E%E5%B1%A4%E8%A8%80%E8%AA%9E%E3%83%A2%E3%83%87%E3%83%AB%E3%81%AB%E3%82%88%E3%82%8B%E6%95%99%E5%B8%AB%E3%81%AA%E3%81%97%E5%BD%A2%E6%85%8B%E7%B4%A0%E8%A7%A3%E6%9E%90/)
- [Forward filtering-Backward samplingによる単語分割でアンダーフローを防ぐ](http://musyoku.github.io/2017/04/15/forward-filtering-backward-sampling%E3%81%A7%E3%82%A2%E3%83%B3%E3%83%80%E3%83%BC%E3%83%95%E3%83%AD%E3%83%BC%E3%82%92%E9%98%B2%E3%81%90/)
文字列の単語ID化をハッシュで実装しているため、学習結果を違うコンピュータで用いると正しく分割が行えない可能性があります。
## 更新履歴
### 2017/11/21
- 前向き確率の計算をlogsumexpからスケーリングに変更
## 動作環境
- Boost
- C++14
- Python 3
## 準備
### macOS
macOSの場合、PythonとBoostはともにbrewでインストールする必要があります。
#### Python 3のインストール
```
brew install python3
```
`PYTHONPATH`を変更する必要があるかもしれません。
#### Boostのインストール
```
brew install boost-python --with-python3
```
### Ubuntu
#### Boostのインストール
```
./bootstrap.sh --with-python=python3 --with-python-version=3.5
./b2 python=3.5 -d2 -j4 --prefix YOUR_BOOST_DIR install
```
Pythonのバージョンを自身のものと置き換えてください。
### ビルド
以下のコマンドで`npylm.so`が生成され、Pythonから利用できるようになります。
```
make install
```
`makefile`内のBoostのパスを自身の環境に合わせて書き換えてください。
Ubuntuでエラーが出る場合は代わりに以下を実行します。
```
make install_ubuntu
```
### MeCabのインストール
半教師あり学習をする場合は必要です。
```
pip install mecab-python3
```
## 学習(教師なし)
`run/unsupervised`にコード例があります。
### 実行例
```
python3 train.py -split 0.9 -l 8 -file YOUR_TEXTFILE
```
### オプション
- -file
- 学習に使うテキストファイル
- -dir
- 学習に使うテキストファイル群が入っているディレクトリ
- 複数ファイルを用いる場合はこちらを指定
- -split
- 読み込んだ行のうち何割を学習に用いるか
- 0から1の実数を指定
- 1を指定すると全データを用いてモデルを学習する
- -l
- 可能な単語の最大長
- 日本語なら8〜16、英語なら16〜20程度を指定
- 文の長さをN、単語の最大長をLとすると、NPYLMの計算量はO(NL^3)になる
なお学習の再開はできません。
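上記オプションに記載されているO(NL^3)という計算量の目安として、最大単語長Lを8から16に増やすと1文あたりの計算量はおよそ8倍になります。以下は本リポジトリのコードではなく、数値を確認するだけの簡単なスケッチです(N=30は仮の文長です)。

```python
# O(N * L^3) の目安を確認するだけのスケッチ(本リポジトリのコードではありません)
N = 30  # 仮の文長
for L in (8, 16, 20):
    print(L, N * L ** 3)  # 8 -> 15360, 16 -> 122880, 20 -> 240000
```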
## 学習(半教師あり)
`run/semi-supervised`にコード例があります。
### 実行例
```
python3 train.py -train-split 0.9 -ssl-split 0.01 -l 8 -file YOUR_TEXTFILE
```
### オプション
- -file
- 学習に使うテキストファイル
- -dir
- 学習に使うテキストファイル群が入っているディレクトリ
- 複数ファイルを用いる場合はこちらを指定
- -train-split
- 読み込んだ行のうち何割を学習に用いるか
- 0から1の実数を指定
- 1を指定すると全データを用いてモデルを学習する
- -ssl-split
- 学習データのうち何割を教師データに用いるか
- 0から1の実数を指定
- -l
- 可能な単語の最大長
- 日本語なら8〜16、英語なら16〜20程度を指定
- 文の長さをN、単語の最大長をLとすると、NPYLMの計算量はO(NL^3)になる
なお学習の再開はできません。
## 単語分割
分割結果をファイルに保存します。
```
python viterbi.py -file YOUR_TEXTFILE
```
### オプション
- -file
- 分割するテキストファイル
- -dir
- 分割するテキストファイルが入っているディレクトリ
- 複数ファイルをまとめて分割する場合はこれを指定
- ファイルごとに個別の出力ファイルが作成されます
- -out
- 出力フォルダ
## 注意事項
研究以外の用途には使用できません。
https://twitter.com/daiti_m/status/851810748263157760
実装に誤りが含まれる可能性があります。
質問等、何かありましたらissueにてお知らせください。
## 展望
現在、[条件付確率場とベイズ階層言語モデルの統合による半教師あり形態素解析](http://chasen.org/~daiti-m/paper/nlp2011semiseg.pdf)と[半教師あり形態素解析NPYCRFの修正](http://www.anlp.jp/proceedings/annual_meeting/2016/pdf_dir/D6-3.pdf)を実装しています。
## 実行結果
#### けものフレンズ実況スレ
アンダーフローが起きていないかを確認するためにわざと1行あたりの文字数を多くして学習させています。
15万行あったデータから改行を削除し4,770行に圧縮しました。
> / アニメ / に / セルリアン / 設定 / を / 出 / す / 必要 / は / まったく / なかった / と思う / um / nk / は / 原作 / と / 漫画 / で / 終わり方 / 変え / て / た / なあ / 原作 / 有り / だ / と / 色々 / 考え / ちゃ / う / けど / オリジナル / だ / と / 余計な / 事 / 考え / なくていい / ね / 犬 / のフレンズ / だから / さ / 、 / ガキが余計な / レスつけてくんな / って / 言って / る / だろ / ジャパリパーク / 滅亡 / まで / あと / 1 / 4 / 日 / 間 / 。 / まだ / 作品 / は / 完結 / して / ない / し / この / あと / 話 / に / どう / 関わって / くる / か / わから / ない / と思う / んだが / 最終話 / まで / い / って / / そういう / こと / だった / のか / ー / ! / / って / な / った / 後で / 見返すと / また / 唸 / ら / され / そう / な / 場面 / が / 多い / ん / だよ / ね / お別れ / エンド / 想像 / する / だけで / 震え / て / くる / けど / ハッピー / エンド / なら / 受け入れ / る / しかない / よ / ね / この / スレ / は / 子供 / が / 多くて / 最終回の / 感動 / も / 萎える / だ / ろ / う / 百合 / アニメ / として / 楽しんで / る / 奴 / もいる / ん / だね / ぇ / あー / 、 / それ / は / 誤 / り / だ / 。 / 低 / レアリティ / フレンズ / にスポットライト / が / 当て / られ / て / る / こと / もある / 。 / 毒 / 抜いた / っていう / のは / 伏線 / 回収 / は / だいたい / や / る / けど / も / 鬱 / な / 感じ / に / は / して / ない / ていう / 意味 / だ / と / 解釈 / し / た / けど / 【 / � / 】 / 『けものフレンズ』 / 第 / 10話「ろっじ」 / より / 、 / 先行 / 場面 / カット / が / 到着 / ! / 宿泊 / した / ロッジ / に / 幽霊 / ……? / 【 / � / 】 / [無断転載禁止]©2ch. / net [ / 8 / 9 / 1 / 1 / 9 / 1 / 9 / 2 / 3 / ] / 実際のところ / けもの / フレンズ / は / 百合 / なのか / ? / 例えば / どこが / 百合 / っぽい / ? / いや / 、 / ある / だろ / 。 / セルリアン / が / いる / 事 / に / よって / 物語 / に / 緊張感 / が / 生まれ / る / 。 / 伏線 / が / 結構 / 大きい / 物 / な気がする / んだ / けど / これ / あと / 2話 / で / 終わ / る / のか / なぁ / ? / もしかして / 「 / ジャパリ / パーク / の / 外に / 人間 / を / 探しに / 行く / よ / ! / カバンちゃん / たち / の / 冒険は / これから / だ!」 / エンド / じゃある / まい / な / それ / でも / あれ / は / 許容 / し / 難 / かった / とおもう / ぞ / そもそも / 利 / 潤 / 第一 / でない / けもの / プロジェクト / に / 売上 / で / 優劣 / 語 / る / の / は / ナンセンス / や / ぞ / の / タイトル / も / いい / な / カバンちゃん / 「 / さーばる / 島の外に / 出 / ちゃ / う / と / 記憶 / も / なくな / る / ん / だ / よ / ? / 」 / さーばる / 「 / うん / 、 / それ / でも / いい / よ / 。 / 」 / カバンちゃん / 「 / う / うん / 、 / ダメ / 。 / ボク / が / のこ / る / よ / 。 / 」 / さーばる / 「 / え? / ・・・ / 」 / カバンちゃん / 「 / さーばる / は / 大切な / 僕 / のフレンズ / だから / 」 / わざわざ / ng / 宣言 / とか / キッズ / の / 代表 / みたいな / も / ん / だろ / 出来 / の / 良い / おさ / らい / サイト / が / ある / よ / 俺 / も / そこ / で / 予習 / し / た / タイム / アタック / 式 / の / クエスト / で / 低 / レア / 構成 / ボーナス / 入るから / 人権 / ない / ってレベル / じゃ / ない / ぞ /
> / 予約 / でき / ない / のに / いつまで / 2巻 / は / ランキング / 1位 / なん / だ / よ / けもの / フレンズ / 、 / 縮めて / フレンズ / 。 / ジャパリパーク / の / 不思議 / な / 不思議な / 生き物 / 。 / 空 / に / 山 / に / 海 / に / フレンズ / は / いた / る / ところ / で / そ / の / 姿 / を / 見 / る / こと / が / 出来 / る / 。この / 少女 / 、 / ヒト / と / 呼ばれ / る / かばん。 / 相棒 / の / サーバル / と / 共に / バトル / & / ゲット / 。 / フレンズの / 数だけ / の / 出会い / が / あり / フレンズの / 数だけ / の / 別れ / がある / ( / 石塚運昇) / ニコ動 / 調子 / 悪 / そう / なんだ / けど / 上映会 / だ / いじょうぶ / か / ね / コスプレ / はよ / もう / 何度も / 書かれてる / だろうけど / 、 / 2話 / ed / で / 入 / って / きた / 身としては / ed / やっぱ / 良い / 。 / 歌 / も / 歌詞 / も / 合って / る / し / 、 / 普通の / アニメ / っぽく / ない / 廃墟 / に / した / の / も / 全部 / 良い / 。 / 情報 / が / 氾濫 / し / すぎ / て / 何 / が / ネタバレ / か / さっぱり / 分からん / けど / 、 / 1つ / ぐらい / は / 本物 / が / まじ / っ / て / そう / だな / 。 / ま、 / 来週 / を / 楽しみ / に / して / る / よ / 。 / アライさん / の / 「 / 困難は / 群れで分け合え / 」 / って / 台詞 / もしかして / アプリ版 / で / 出て / きた / こと / あった / りする / ? / それ / なら / 記憶 / の / 引 / 継ぎ / 説 / は / ほぼ / 間違え / なさ / そう / だ / けど / 神 / 展開 / で / ワロタ / これ / は / 地上波 / 流 / せ / ません / ね / ぇ / … / まあ、 / 数 / 打ちゃ当たる / 11話の / 展開 / は / 予想 / されて / なかった / 気配 / だ / が / 汗 / まあ / ニコ動 / ランキング / に / あ / っ / た / アプリ版 / ストーリー / 見 / た / 時 / 点 / で / すでに / 覚悟はできてる / 一 / 人 / でも / いいから / 出 / して / ほしかった / … / マジで / サーバル / / プレーリードッグ / / ヘラジカ / 殷周 / 伝説 / みたい / な / 肉 / マン / を / 想像 / し / ちゃ / っ / た / じゃぱりまん / の / 中身 / で / 肉 / が / ある / なら / だ / が / 無い / から / 安心 /
> / 知識 / や / 性格 / まで / クローン / は / 無理 / だ / と思う / nhkで / アニメ / 放送 / から / の / 紅白 / 出場 / だと / 嫌な予感がする / 藤子不二 / 雄 / や / 大友 / 克洋や / 鳥山明 / は / エール / を / 送 / られ / た / が、 / 自分 / が / 理解 / でき / ない / もの / が / 流行 / っ / て / る / と / 怒 / った / らしい / な / 巨人の星 / とか / ス / ポ / 根 / も / のは / 、 / どうして / こんな / もの / が / 人気 / なん / だ / って / アシスタント / と / 担当 / に / 怒鳴り / 散らし / た / そう / な / 日本語 / で / しか / 伝わらない / 表現 / ある / もん / ね / ひらがな / カタカナ / 漢字 / でも / ニュアンス / 使い / 分 / け / られ / る / し / また / 同時 / に / 英語 / いい / なあ / と思う / 所 / もある / 親友 / ( / ?) / なのに / そ / の / 欠片 / も / み / られ / ない / 猫 / の / 話 / は / やめ / なさい / … / なん / か / 、 / マズルの / ところ / が / カイ / ゼル / 髭 / みたい / で / 、 / かわいい / っていうより / カッコイイ / とおもう / ん / だが / / 「 / ( / いや / 俺ら / に / 言われ / て / も / ) / 」 / って / 困惑 / する / 様子 / が / w / 藤子不二 / 雄 / は / 貶 / そう / と / 思って / た / ら / 全力で / 尊敬 / されて / 持ち / 上げ / て / くる / ので / 面倒見 / ざるを得な / かった / とか / いう / ホント / か / ウソ / か / 分からない / 逸話 / すこ / 世界 / 最高峰 / の / 日本 / アニメ / を / 字幕 / 無し / で / 観 / られ / る / 幸せ / たぶん / そうな / ん / だろう / とは思う / が / おいしい / とこ / だけ / 取ら / れ / て / 何も / やってない / 扱い / で / うん / ざ / り / して / そう / 結局 / 自分 / の / 好 / み / って事 / か / 本人 / に / しか / わから / ない / 先 / 駆 / 者 / として / の / 強烈な / 自負 / と / 、 / 追い / 抜か / れ / る / 恐怖 / かわ / あった / のだろう / と愚考 / した / 。 / スポーツ / 漫画や / 劇 / 画 / が / 流行 / っ / た / 時 / ノイローゼ / になり / かけ / た / と / 聞く / が / 、 / 90年代 / 以降 / の / トーン / バリバリ / アニメ / 絵柄 / や / 萌え / 文化 / とか / 見 / た / ら / どう / なってしまう / んだろう / あと / サーバルちゃん / かわいい / コクボス / 「 / ジカ / ン / ダヨ / 」 / 礼儀 / は / 守 / る / 人 / だろう / … / 内心 / 穏やか / ではない / が / ニコ動 / の再生数 / なんて / まったく / 当て / に / ならん / ぞ / ニコ生 / の / 来場者 / 数 / と / 有料 / 動画 / の / 再生数 / は / 当て / に / して / いい / けど / 6話 / の / へいげん / の / とき / / ヘラジカ / さん / たち / に / 「 / サーバルキャット / のサーバル / だよ / 」って / 自己紹介 / して / た / のが / なんか / 不思議 / だ / った / な / 「 / 省略 / する / 」って / 文化 / ある / ん / だ / / みたい / な / 一話 / の再生数 / で / 分かる / のは / 洗脳 / 度 / だけ / だ / な / すぐ / 解 / け / る / 洗脳 / かも知れん / が / それ / 見たい / わ / めちゃくちゃ / 好循環 / じゃない / か / … / イッカク / クジラ / の / イッカク / だ。 / って / 名乗 / って / る / キャラ / もいた / し / 。 / いや~ / メンゴメンゴ / / あ / 、 / ボス / さん / きゅー / w / アプリ版 / の / オオ / アルマジロ / の / 声で / 絡んで / ほしかった / ( / cv / 相 / 沢 / 舞 / ) / 名前 / の / 概念 / 例えば / ヘラジカ / は / 種族 / として / 言って / る / のか / 名前 / と / して / 言って / る / のか / かばんちゃん / と / 名 / 付け / て / る / から / 概念 / として / は / 在 / る / ん / だろう / けど / 果たして / クローン / は / 同じ / ポテンシャル / を / 引き / 出 / せ / る / だろう / か / 。 / 藤子f / という / か / ドラえもん / に / は / 完全 / 敗北 / を / 認め / て / た / らしい / から / ね / ドラえもん / を / 超え / る / キャラ / は / 作れ / ない / って / どうぶつ / スクープ / と / ダーウィン / と / wbc / どっち / を優先 / すべき / か /
> / 捨て / た / り / しない / ん / じゃないかな / ? / ま、 / ちょっと / (ry / ロッ / ソファンタズマ / は / 先代 / サーバル / の / 技 / って / 言いたい / のは / わかる / けど / 。 / かばんちゃん / 視点 / から / 見ると / op / の / 映像 / の意味 / になる / と / いう / ダブルミーニング / なの / かも知れない / これ / は / なかなか / 面白い / 基本的に / ゲスト / フレンズって / カップル / にな / る / けど / この / 二人 / は / その後 / どうなった / ん / だろう / ね / 杏子 / も / 野中 / か / 優しく / て / 元気な / 女の子 / の / 唐突な / 涙 / は / めちゃくちゃ / くる / もの / がある / な / ppp / が / 歌 / って / る / こと / から / 考えると / あり / 得 / る / ね / 、 / 宣伝 / 曲 / そして / まんま / と / ようこそ / され / ました / なんだか / コケ / そう / な / 気がして / ならない / 12話 / だから / 話 / を / 膨らま / せ / て / 伏線 / を / 回収 / して / って / の / が / でき / た / けど / だら / っと / 続け / て / いく / と / すると / ・・・ / たまに / 見 / る / 豆腐 / や / 冷奴 / は / 何 / を / 示唆 / して / る / ん / だ / ? / 姿 / が / フレンズ / だと / 判別 / でき / ない / ん / じゃないか / とりあえず / 初版 / は / 無理 / でも / 重版分 / 買おう / 古典sf / 的に / タイ / ムスリップ / して / アプリ版 / プレイヤー / として / ミライさん / の / 運命 / を / 変え / に / 行く / とか / は / あり / そう /
> / すごーい / ぞ / おー / ! / が / 正解 / で / 世界遺 / 産! / が / 空耳 / おいおい / エヴァ / か / ? / 聖域 / への / 立ち / 入り / 認可 / 、 / 正体 / 不明 / な / 新規 / フレンズ / の / 評価 / 、 / 困った / とき / の / 相談 / 役 / 色々 / やって / る / な / 輪廻転生 / して / 二人 / は / 一緒 / も / 人間 / は / 滅んで / て / かばんちゃん / は / サーバルちゃん / とずっと一緒 / も / ぶっちゃけ / 百合厨の願望 / だし / たつきが / そんな安直な / 設定 / に / せ / ん / でしょ / ジャパリパーク / って / 時間 / の / 進 / み / が / サンドスター / その他 / の / 影響 / で / 物凄く / 早 / く / なって / る / から / 人工 / 物 / が / 朽ち / て / た / り / する / ん / か / な / 聞こえ / ない / と / 言えば / ツチノコ / の / 「 / ピット器官 / ! / 」 / って / 言 / っ / た / 後 / に / モ / ニャモニャ / … / って / なんか / 言って / る / けど / なんて / 言って / る / のか / 未だに / 分からない / … / メ / イ / ズ / ランナー / 的な / 人類滅亡 / とか / ちょっと / 似てる / し / 13話 / ネタバレ / 中尉 / か / つて / の / ロッジ / で / は / フレンズと / 一 / 夜 / を / 共に / する / こと / が / でき / る / 人工 / 物 / は / ヒト / が / 使わ / なくな / る / と / たった / 数年 / で / 朽ち / る / よ / 林 / 業 / 管理 / も / やって / る / から / な / toki / o / のフレンズ / と / 言われ / て / も / 不思議 / はない / 図書館 / に / 入 / り / 浸 / って / ゴロゴロ / 絵本 / 読んで / る / フレンズ / い / ない / かな / ピット器官 / ! / ・・・ / ・・・ / だとかでぇ、 / 俺には / 赤外線が見えるからな / ! / ( / ・ / ∀・) / 目 / が / か / ゆい / … / ジャパリパーク / に / も / 花粉症 / は / ある / のか / な / サーバル / が / なぜ / ハシビロコウ / だけ / ハシビロ / ちゃん / 呼び / なのか / が / 最大の / 謎 / 12話 / は / op / は / 一番 / 最後 / カバンちゃん / と / サーバルちゃんが / 仲良く / ブランコ / に / 揺ら / れ / る / 絵 / が / 入 / る / よ / けもフレ / を / 細かく / 見 / て / ない / 間違って / セリフ / 覚え / て / る / アプリ / 時代 / の / 知識 / を / 間違って / 捉え / て / る / って / いう / ので / 、 / 悶々と / 考察 / してる / 人 / の / 多 / い / こと / 多い / こと / … / … /
#### 2ch
2chから集めた884,158行の書き込みで学習を行いました。
> / どことなく / 硬貨 / の / 気配 / が / ある / な /
> / 展開 / の / ヒント / と / 世界観 / を / 膨らま / せ / る / ギミック / と / 物語 / の / 伏線 / を / 本当に / 勘違い / して / る / 人 / は / い / ない / でしょ /
> / トラブっても / その / 前の / 状態 / に / 簡単 / に / 戻 / れ / る /
> / レシート / を / わ / た / さ / ない / 会社 / は / 100% / 脱税 / している /
> / すっきり / した / 。 / 実装 / 当時 / は / 2度と / やりたく / 無い / と思った / けど /
> / 未だ / 趣味 / な / 個人 / 用途 / で / win / 10 / に / 頑なに / 乗り換え / ない / ヤツ / なんて / 新しい / もん / に / 適応 / でき / ない / 老 / 化 / 始ま / っ / ちゃ / って / る / お / 人 / か /
> / 実家の / 猫 / がよくやる / けど / あんまり / 懐 / かれ / て / る / 気がしない /
> / ラデ / の / ラインナップ / は / こう / いう / 噂 / のようだ。 /
> / ダメウォ / なんて / 殆ど / で / ねー / じゃねーか / ど / アホ /
> / 新 / retina / 、 / 旧 / retina / が / 併売 / され / て / る / 中 / で / 比較 / やら / 機種 / 選び / ごと / に / 別 / スレ / 面倒 / だ / もん /
> / イオク / 出 / る / だけで / 不快 /
> / あの / まま / やってりゃ / ジュリア / の / 撃墜 / も / 時間の問題 / だ / っ / た / し /
> / も / し / 踊ら / され / て / た / ら / 面白 / さ / を / 感じ / られ / る / はず / だ /
> / 二連 / スレ建て / オ / ッ / ツ / オ / ッ / ツ /
> / の / ガチャ限 / 定運極化特別ルール / って / 何 / ですか / ? /
> / 特に / その / 辺 / フォロー / ない / まま / あの / 状況 / で / と / どめ / 刺 / し / 損 / ね / ました / で / 最後 / まで / いく / の / は / な / ・・・ /
> / こうなると / 意外 / に / ツチノコ / が / ハードル / 低 / そう /
> / 強制 / アップデート / のたびに / 自分 / の / 使い方 / にあわせた / 細かい / 設定 / を / 勝手に / 戻 / す / だけ / で / なく /
> / マジか了 / 解した /
> / 今度 / は / mac / 使い / () / に / 乗り換え / た / ん / だろう / が / ・・・ / 哀れ / よ / のぅ /
> / 今 / 後 / も / ノエル / たくさん / 配 / って / くれ / る / なら / 問題ない / けど /
> / マルチ / 魔窟 / 初めて / やった / けど / フレンド / が / いい人 / で / 上手く / 出来 / た / わ /
> / 咲 / くん / も / 女 / の / 子 / 声優 / が / よかった /
> / 確かに / 少し / づつ / エンジン / かか / っ / て / き / た / 感じ / が / する / な /
> / くっそ / 、 / まず / ラファエル / が / 出 / ねえ /
> / 第六 / 世代 / cpu / で / 組 / もう / と思って / る / けど / win10 / 買 / う / の / は / は / 待った / 方が / いい / のか / な / これ / … / (´・ω・`) /
> / 移動 / 先 / で / ある程度 / なんで / も / で / き / る / mbp / は / 本当に / いい / 製品 / だ / と思います /
> / いや / 俺 / は / そこ / が / 好き /
> / と言えば / ギャラホルン / 崩壊 / しそう /
> / オオクニ欲 / しかった /
|
[
"Morphology",
"Syntactic Text Processing",
"Text Segmentation"
] |
[] |
true |
https://github.com/Kyubyong/neural_japanese_transliterator
|
2017-01-01T04:20:54Z
|
Can neural networks transliterate Romaji into Japanese correctly?
|
Kyubyong / neural_japanese_transliterator
Public
Branches
Tags
Go to file
Go to file
Code
data
images
results
.gitignore
LICEN…
READ…
annota…
data_l…
eval.py
hyperp…
modul…
networ…
prepro.py
train.py
utils.py
About
Can neural networks transliterate
Romaji into Japanese correctly?
# language # japanese # transliteration
Readme
Apache-2.0 license
Activity
173 stars
9 watching
17 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 100.0%
Code
Issues
2
Pull requests
Actions
Projects
Security
Insights
Neural Japanese Transliteration—can you
do better than SwiftKey™ Keyboard?
README
Apache-2.0 license
In this project, I examine how well neural networks can convert Roman letters into the
Japanese script, i.e., Hiragana, Katakana, or Kanji. The accuracy evaluation results for
896 Japanese test sentences outperform the SwiftKey™ keyboard, a well-known
smartphone multilingual keyboard, by a small margin. It seems that neural networks can
learn this task easily and quickly.
NumPy >= 1.11.1
TensorFlow == 1.2
regex (Enables us to use convenient regular expression posix)
janome (for morph analysis)
romkan (for converting kana to romaji)
The modern Japanese writing system employs three
scripts: Hiragana, Katakana, and Chinese characters
(kanji in Japanese).
Hiragana and Katakana are phonetic, while Chinese
characters are not.
In the digital environment, people mostly type Roman
alphabet (a.k.a. Romaji) to write Japanese. Basically, they
rely on the suggestion the transliteration engine returns.
Therefore, how accurately an engine can predict the
word(s) the user has in mind is crucial with respect to a
Japanese keyboard.
Look at the animation on the right. You are to type "nihongo", then the machine shows
日本語 on the suggestion bar.
I frame the problem as a seq2seq task.
Inputs: nihongo。
Outputs: 日本語。
For training, we used Leipzig Japanese Corpus.
For evaluation, 896 Japanese sentences were collected separately. See
data/test.csv .
Abstract
Requirements
Background
Problem Formulation
Data
I adopted the encoder and the first decoder architecture of Tacotron, a speech synthesis
model.
hyperparams.py contains hyperparameters. You can change the value if necessary.
annotate.py makes Romaji-Japanese parallel sentences.
prepro.py defines and makes vocabulary and training data.
modules.py has building blocks for networks.
networks.py has encoding and decoding networks.
data_load.py covers some functions regarding data loading.
utils.py has utility functions.
train.py is about training.
eval.py is about evaluation.
STEP 1. Download Leipzig Japanese Corpus and extract jpn_news_2005-2008_1M-
sentences.txt to data/ folder.
STEP 2. Adjust hyperparameters in hyperparams.py if necessary.
STEP 3. Run python annotate.py .
STEP 4. Run python prepro.py . Or download the preprocessed files.
STEP 5. Run train.py . Or download the pretrained files.
STEP 1. Run eval.py .
STEP 2. Install the latest SwiftKey keyboard app and manually test it for the same
sentences. (Don't worry. You don't have to because I've done it:))
The training curve looks like this.
Model Architecture
Contents
Training
Testing
Results
The evaluation metric is CER (Character Error Rate). Its formula is
|
# Neural Japanese Transliteration—can you do better than SwiftKey™ Keyboard?
## Abstract
In this project, I examine how well neural networks can convert Roman letters into the Japanese script, i.e., Hiragana, Katakana, or Kanji. The accuracy evaluation results for 896 Japanese test sentences outperform the SwiftKey™ keyboard, a well-known smartphone multilingual keyboard, by a small margin. It seems that neural networks can learn this task easily and quickly.
## Requirements
* NumPy >= 1.11.1
* TensorFlow == 1.2
* regex (enables convenient POSIX-style regular expressions)
* janome (for morph analysis)
* romkan (for converting kana to romaji)
## Background
<img src="images/swiftkey_ja.gif" width="200" align="right">
* The modern Japanese writing system employs three scripts: Hiragana, Katakana, and Chinese characters (kanji in Japanese).
* Hiragana and Katakana are phonetic, while Chinese characters are not.
* In the digital environment, people mostly type Roman alphabet (a.k.a. Romaji) to write Japanese. Basically, they rely on the suggestion the transliteration engine returns. Therefore, how accurately an engine can predict the word(s) the user has in mind is crucial with respect to a Japanese keyboard.
* Look at the animation on the right. You are to type "nihongo", then the machine shows 日本語 on the suggestion bar.
## Problem Formulation
I frame the problem as a seq2seq task.
Inputs: nihongo。<br>
Outputs: 日本語。
## Data
* For training, we used [Leipzig Japanese Corpus](http://corpora2.informatik.uni-leipzig.de/download.html).
* For evaluation, 896 Japanese sentences were collected separately. See `data/test.csv`.
## Model Architecture
I adopted the encoder and the first decoder architecture of [Tacotron](https://arxiv.org/abs/1703.10135), a speech synthesis model.
## Contents
* `hyperparams.py` contains hyperparameters. You can change the value if necessary.
* `annotate.py` makes Romaji-Japanese parallel sentences.
* `prepro.py` defines and makes vocabulary and training data.
* `modules.py` has building blocks for networks.
* `networks.py` has encoding and decoding networks.
* `data_load.py` covers some functions regarding data loading.
* `utils.py` has utility functions.
* `train.py` is about training.
* `eval.py` is about evaluation.
## Training
* STEP 1. Download [Leipzig Japanese Corpus](http://corpora2.informatik.uni-leipzig.de/downloads/jpn_news_2005-2008_1M-text.tar.gz) and extract `jpn_news_2005-2008_1M-sentences.txt` to `data/` folder.
* STEP 2. Adjust hyperparameters in `hyperparams.py` if necessary.
* STEP 3. Run `python annotate.py`.
* STEP 4. Run `python prepro.py`. Or download the [preprocessed files](https://www.dropbox.com/s/tv81rxcjr3x9eh1/preprocessed.zip?dl=0).
* STEP 5. Run `train.py`. Or download the [pretrained files](https://www.dropbox.com/s/wrbr7tnf4zva4bj/logdir.zip?dl=0).
## Testing
* STEP 1. Run `eval.py`.
* STEP 2. Install the latest SwiftKey keyboard app and manually test it for the same sentences. (Don't worry. You don't have to because I've done it:))
## Results
The training curve looks like this.
<img src="images/training_curve.png">
The evaluation metric is CER (Character Error Rate). Its formula is
* edit distance / # characters = CER.
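For reference, here is a minimal sketch (not code from this repository) of how CER can be computed with a standard edit-distance routine; the example strings are arbitrary.

```python
# Minimal CER sketch: edit distance divided by the number of reference characters.
def edit_distance(ref: str, hyp: str) -> int:
    # standard dynamic-programming Levenshtein distance
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(references, hypotheses):
    errors = sum(edit_distance(r, h) for r, h in zip(references, hypotheses))
    chars = sum(len(r) for r in references)
    return errors / chars

print(cer(["日本語。"], ["日本誤。"]))  # 1 error / 4 characters = 0.25
```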
The following is the results after 13 epochs, or 79,898 global steps. Details are available in `results/*.csv`.
| Proposed (Greedy decoding) | Proposed (Beam decoding) | SwiftKey 6.4.8.57 |
|--- |--- |--- |
|1595/12057=0.132 | 1517/12057=0.125 | 1640/12057=0.136|
|
[
"Language Models",
"Text Generation"
] |
[] |
true |
https://github.com/agatan/yoin
|
2017-01-31T14:02:09Z
|
A Japanese Morphological Analyzer written in pure Rust
|
agatan / yoin
Public
Branches
Tags
Go to file
Go to file
Code
benches
src
tests
yoin-core
yoin-ip…
.gitignore
.travis.…
Cargo.l…
Cargo.…
LICEN…
NOTIC…
READ…
yoin is a Japanese morphological analysis engine written in pure Rust.
mecab-ipadic is embedded in yoin.
About
A Japanese Morphological Analyzer
written in pure Rust
# nlp # rust # japanese
Readme
MIT license
Activity
26 stars
2 watching
2 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Rust 100.0%
Code
Issues
2
Pull requests
Actions
Projects
Security
Insights
Yoin - A Japanese Morphological Analyzer
:) $ yoin
すもももももももものうち
すもも
名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も
助詞,係助詞,*,*,*,*,も,モ,モ
もも
名詞,一般,*,*,*,*,もも,モモ,モモ
README
MIT license
yoin is available on crates.io
yoin can be included in your Cargo project like this:
and write your code like this:
By default, yoin reads lines from stdin, analyzes each line, and outputs
results.
も
助詞,係助詞,*,*,*,*,も,モ,モ
もも
名詞,一般,*,*,*,*,もも,モモ,モモ
の
助詞,連体化,*,*,*,*,の,ノ,ノ
うち
名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
EOS
Build & Install
CLI
:) $ cargo install yoin
# or
:) $ git clone https://github.com/agatan/yoin
:) $ cd yoin && cargo install
Library
[dependencies]
yoin = "*"
extern crate yoin;
Usage - CLI
:) $ yoin
すもももももももものうち
すもも
名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も
助詞,係助詞,*,*,*,*,も,モ,モ
もも
名詞,一般,*,*,*,*,もも,モモ,モモ
も
助詞,係助詞,*,*,*,*,も,モ,モ
もも
名詞,一般,*,*,*,*,もも,モモ,モモ
の
助詞,連体化,*,*,*,*,の,ノ,ノ
うち
名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
EOS
そこではなしは終わりになった
そこで
接続詞,*,*,*,*,*,そこで,ソコデ,ソコデ
Or, reads from file.
This software is under the MIT License and contains the MeCab-ipadic
model. See LICENSE and NOTICE.txt for more details.
はなし
名詞,一般,*,*,*,*,はなし,ハナシ,ハナシ
は
助詞,係助詞,*,*,*,*,は,ハ,ワ
終わり
動詞,自立,*,*,五段・ラ行,連用形,終わる,オワリ,オワリ
に
助詞,格助詞,一般,*,*,*,に,ニ,ニ
なっ
動詞,自立,*,*,五段・ラ行,連用タ接続,なる,ナッ,ナッ
た
助動詞,*,*,*,特殊・タ,基本形,た,タ,タ
EOS
:) $ cat input.txt
すもももももももものうち
:) $ yoin --file input.txt
すもも
名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も
助詞,係助詞,*,*,*,*,も,モ,モ
もも
名詞,一般,*,*,*,*,もも,モモ,モモ
も
助詞,係助詞,*,*,*,*,も,モ,モ
もも
名詞,一般,*,*,*,*,もも,モモ,モモ
の
助詞,連体化,*,*,*,*,の,ノ,ノ
うち
名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
EOS
LICENSE
|
## Yoin - A Japanese Morphological Analyzer
[](https://travis-ci.org/agatan/yoin)
[](https://crates.io/crates/yoin)
`yoin` is a Japanese morphological analysis engine written in pure Rust.
[mecab-ipadic](https://taku910.github.io/mecab/) is embedded in `yoin`.
```sh
:) $ yoin
すもももももももものうち
すもも 名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
の 助詞,連体化,*,*,*,*,の,ノ,ノ
うち 名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
EOS
```
## Build & Install
*`yoin` is available on [crates.io](https://crates.io)*
### CLI
```sh
:) $ cargo install yoin
# or
:) $ git clone https://github.com/agatan/yoin
:) $ cd yoin && cargo install
```
### Library
yoin can be included in your Cargo project like this:
```toml
[dependencies]
yoin = "*"
```
and write your code like this:
```rust
extern crate yoin;
```
## Usage - CLI
By default, `yoin` reads lines from stdin, analyzes each line, and outputs results.
```sh
:) $ yoin
すもももももももものうち
すもも 名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
の 助詞,連体化,*,*,*,*,の,ノ,ノ
うち 名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
EOS
そこではなしは終わりになった
そこで 接続詞,*,*,*,*,*,そこで,ソコデ,ソコデ
はなし 名詞,一般,*,*,*,*,はなし,ハナシ,ハナシ
は 助詞,係助詞,*,*,*,*,は,ハ,ワ
終わり 動詞,自立,*,*,五段・ラ行,連用形,終わる,オワリ,オワリ
に 助詞,格助詞,一般,*,*,*,に,ニ,ニ
なっ 動詞,自立,*,*,五段・ラ行,連用タ接続,なる,ナッ,ナッ
た 助動詞,*,*,*,特殊・タ,基本形,た,タ,タ
EOS
```
Or, it can read input from a file:
```sh
:) $ cat input.txt
すもももももももものうち
:) $ yoin --file input.txt
すもも 名詞,一般,*,*,*,*,すもも,スモモ,スモモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
も 助詞,係助詞,*,*,*,*,も,モ,モ
もも 名詞,一般,*,*,*,*,もも,モモ,モモ
の 助詞,連体化,*,*,*,*,の,ノ,ノ
うち 名詞,非自立,副詞可能,*,*,*,うち,ウチ,ウチ
EOS
```
## LICENSE
This software is under the MIT License and contains the MeCab-ipadic model.
See `LICENSE` and `NOTICE.txt` for more details.
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/tmu-nlp/simple-jppdb
|
2017-03-09T08:38:20Z
|
A paraphrase database for Japanese text simplification
|
tmu-nlp / simple-jppdb
Public
Branches
Tags
Go to file
Go to file
Code
READ…
comple…
simple-…
word-l…
日本語のテキスト平易化のために利用可能な2つの大規
模な辞書を構築しました。 Wikipediaから得られる統計
情報をもとに、日本語の各単語に日本語教育語彙表に由
来する3段階の難易度(初級・中級・上級)を付与しま
した。
##1. 単語難易度辞書 (word-level-japanese) 57万単語に
ついて単語 <TAB> 難易度の情報を収録しました。 ただ
し、難易度は1が初級、2が中級、3が上級を意味しま
す。 Wikipediaの本文をMeCabとmecab-ipadic-NEologd
で分割したものを単語とし、出現頻度が5回以上の単語
を対象としました。
##2. 平易な言い換え辞書 (simple-ppdb-japanese) 34万
単語対について言い換え確率 <TAB> 難解な単語 <TAB> 平
易な単語 <TAB> 難解な単語の難易度 <TAB> 平易な単語の難
易度の情報を収録しました。 ただし、難易度は1が初
級、2が中級、3が上級を意味します。 PPDB: Japanese
(日本語言い換えデータベース)のうち、上記の単語難
易度辞書に掲載されている単語のみからなる言い換え対
を対象としました。
About
A paraphrase database for
Japanese text simplification
Readme
Activity
Custom properties
32 stars
5 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Simple PPDB: Japanese
README
日本語教育語彙表に掲載されている単語難易度の情報
は?に置換して隠しています。 日本語教育語彙表の利用
規約に同意できる場合は、まず日本語教育語彙表をダウ
ンロードしてください。 python complement.py
/path/to/日本語教育語彙表.csv を実行すると、すべての
情報が補完されます。
梶原智之, 小町守. Simple PPDB: Japanese. 言語処理学
会第23回年次大会, P8-5, pp.529-532, 2017.
Creative Commons Attribution Share-Alike Licenseでの
利用が可能です。 ただし、日本語教育語彙表の単語難
易度の情報を使う場合は、日本語教育語彙表のライセン
スをご確認ください。
For questions, please contact Tomoyuki Kajiwara at
Tokyo Metropolitan University
注意事項
参考文献
ライセンス
|
# Simple PPDB: Japanese
日本語のテキスト平易化のために利用可能な2つの大規模な辞書を構築しました。
Wikipediaから得られる統計情報をもとに、日本語の各単語に[日本語教育語彙表](http://jhlee.sakura.ne.jp/JEV.html)に由来する3段階の難易度(初級・中級・上級)を付与しました。
## 1. 単語難易度辞書 (word-level-japanese)
57万単語について`単語 <TAB> 難易度`の情報を収録しました。
ただし、難易度は1が初級、2が中級、3が上級を意味します。
Wikipediaの本文を[MeCab](http://taku910.github.io/mecab/)と[mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd)で分割したものを単語とし、出現頻度が5回以上の単語を対象としました。
## 2. 平易な言い換え辞書 (simple-ppdb-japanese)
34万単語対について`言い換え確率 <TAB> 難解な単語 <TAB> 平易な単語 <TAB> 難解な単語の難易度 <TAB> 平易な単語の難易度`の情報を収録しました。
ただし、難易度は1が初級、2が中級、3が上級を意味します。
[PPDB: Japanese(日本語言い換えデータベース)](http://ahclab.naist.jp/resource/jppdb/)のうち、上記の単語難易度辞書に掲載されている単語のみからなる言い換え対を対象としました。
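以下は、上記のタブ区切り形式を読み込むための最小限のスケッチです。本リポジトリに含まれるコードではなく、ファイル名 `simple-ppdb-japanese` や例に使っている単語も仮のものです。

```python
from collections import defaultdict

# 言い換え確率 <TAB> 難解な単語 <TAB> 平易な単語 <TAB> 難解な単語の難易度 <TAB> 平易な単語の難易度
# という形式のファイルを、難解な単語をキーにして読み込む例(ファイル名は仮定)
paraphrases = defaultdict(list)
with open("simple-ppdb-japanese", encoding="utf-8") as f:
    for line in f:
        prob, hard, simple, hard_level, simple_level = line.rstrip("\n").split("\t")
        # 難易度は complement.py 実行前は「?」のままなので文字列として扱う
        paraphrases[hard].append((float(prob), simple, hard_level, simple_level))

# ある難解な単語(仮の例)に対する言い換え候補を、確率の高い順に5件表示
for prob, simple, hard_level, simple_level in sorted(paraphrases.get("容易", []), reverse=True)[:5]:
    print(prob, simple, hard_level, simple_level)
```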
## 注意事項
[日本語教育語彙表](http://jhlee.sakura.ne.jp/JEV.html)に掲載されている単語難易度の情報は?に置換して隠しています。
日本語教育語彙表の利用規約に同意できる場合は、まず日本語教育語彙表をダウンロードしてください。
`python complement.py /path/to/日本語教育語彙表.csv`を実行すると、すべての情報が補完されます。
## 参考文献
梶原智之, 小町守. Simple PPDB: Japanese. 言語処理学会第23回年次大会, P8-5, pp.529-532, 2017.
## ライセンス
[Creative Commons Attribution Share-Alike License](http://creativecommons.org/licenses/by-sa/3.0/)での利用が可能です。
ただし、[日本語教育語彙表](http://jhlee.sakura.ne.jp/JEV.html)の単語難易度の情報を使う場合は、日本語教育語彙表のライセンスをご確認ください。
For questions, please contact [Tomoyuki Kajiwara at Tokyo Metropolitan University](https://sites.google.com/site/moguranosenshi/).
|
[
"Paraphrasing",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/twada/japanese-numerals-to-number
|
2017-03-18T04:14:04Z
|
Converts Japanese Numerals into number
|
twada / japanese-numerals-to-number
Public
Branches
Tags
Go to file
Go to file
Code
.github/workflows
test
.gitignore
CHANGELOG.…
LICENSE
README.md
index.js
package-lock.j…
package.json
Converts Japanese Numerals into number .
About
Converts Japanese Numerals into
number
Readme
MIT license
Activity
58 stars
3 watching
1 fork
Report repository
Releases
4 tags
Packages
No packages published
Contributors
2
Languages
JavaScript 100.0%
Code
Issues
Pull requests
1
Actions
Projects
Security
Insights
japanese-numerals-to-number
USAGE
const ja2num = require('japanese-numerals-to-number');
const assert = require('assert');
assert(ja2num('〇') === 0);
assert(ja2num('一億二千三百四十五万六千七百八十九') === 123456789);
assert(ja2num('二千十七') === 2017);
assert(ja2num('二〇一七') === 2017); // supports positional notation
assert.throws(() => ja2num(null), TypeError);
assert.throws(() => ja2num('二十三十'), Error);
assert.throws(() => ja2num('億千万'), Error);
assert(ja2num('壱百壱拾') === 110); // supports formal numerals (daiji) used in legal documents
assert.throws(() => ja2num('一百一十'), Error);
README
MIT license
Supports Japanese Numerals between 0 (that is '〇' ) and Number.MAX_SAFE_INTEGER
( 9007199254740991 , that is '九千七兆千九百九十二億五千四百七十四万九百九十一' ). Any number larger than
Number.MAX_SAFE_INTEGER is not guaranteed.
Throws TypeError when argument is not a string.
Throws Error when argument is an invalid Japanese Numerals.
〇, 一, 二, 三, 四, 五, 六, 七, 八, 九
十, 百, 千, 万, 億, 兆
壱, 弐, 参, 拾
Takuto Wada
API
var convertedNum = ja2num(stringOfJapaneseNumerals);
supported characters
numbers 0 to 9
names of powers of 10
formal numerals (daiji) used in legal documents
INSTALL
$ npm install japanese-numerals-to-number
AUTHOR
LICENSE
|
japanese-numerals-to-number
================================
Converts [Japanese Numerals](https://en.wikipedia.org/wiki/Japanese_numerals) into `number`.
[![Build Status][ci-image]][ci-url]
[![NPM version][npm-image]][npm-url]
[![Code Style][style-image]][style-url]
[![License][license-image]][license-url]
USAGE
---------------------------------------
```js
const ja2num = require('japanese-numerals-to-number');
const assert = require('assert');
assert(ja2num('〇') === 0);
assert(ja2num('一億二千三百四十五万六千七百八十九') === 123456789);
assert(ja2num('二千十七') === 2017);
assert(ja2num('二〇一七') === 2017); // supports positional notation
assert.throws(() => ja2num(null), TypeError);
assert.throws(() => ja2num('二十三十'), Error);
assert.throws(() => ja2num('億千万'), Error);
assert(ja2num('壱百壱拾') === 110); // supports formal numerals (daiji) used in legal documents
assert.throws(() => ja2num('一百一十'), Error);
```
API
---------------------------------------
### var convertedNum = ja2num(stringOfJapaneseNumerals);
- Supports Japanese Numerals between `0` (that is `'〇'`) and [Number.MAX_SAFE_INTEGER](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) (`9007199254740991`, that is `'九千七兆千九百九十二億五千四百七十四万九百九十一'`). Any number larger than `Number.MAX_SAFE_INTEGER` is not guaranteed.
- Throws `TypeError` when the argument is not a string.
- Throws `Error` when the argument is not a valid sequence of Japanese Numerals.
### supported characters
#### numbers 0 to 9
- `〇`, `一`, `二`, `三`, `四`, `五`, `六`, `七`, `八`, `九`
#### names of powers of 10
- `十`, `百`, `千`, `万`, `億`, `兆`
#### [formal numerals (daiji) used in legal documents](https://en.wikipedia.org/wiki/Japanese_numerals#Formal_numbers)
- `壱`, `弐`, `参`, `拾`
INSTALL
---------------------------------------
```sh
$ npm install japanese-numerals-to-number
```
AUTHOR
---------------------------------------
* [Takuto Wada](https://github.com/twada)
LICENSE
---------------------------------------
Licensed under the [MIT](https://github.com/twada/japanese-numerals-to-number/blob/master/LICENSE) license.
[npm-url]: https://npmjs.org/package/japanese-numerals-to-number
[npm-image]: https://badge.fury.io/js/japanese-numerals-to-number.svg
[ci-image]: https://github.com/twada/japanese-numerals-to-number/workflows/Node.js%20CI/badge.svg
[ci-url]: https://github.com/twada/japanese-numerals-to-number/actions?query=workflow%3A%22Node.js+CI%22
[style-url]: https://github.com/Flet/semistandard
[style-image]: https://img.shields.io/badge/code%20style-semistandard-brightgreen.svg
[license-url]: https://github.com/twada/japanese-numerals-to-number/blob/master/LICENSE
[license-image]: https://img.shields.io/badge/license-MIT-brightgreen.svg
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/Greatdane/Convert-Numbers-to-Japanese
|
2017-03-24T12:30:01Z
|
Converts Arabic numerals, or 'western' style numbers, to a Japanese context.
|
Greatdane / Convert-Numbers-to-Japanese
Public
Branches
Tags
Go to file
Go to file
Code
Conver…
LICEN…
READ…
Converts Arabic numerals, or 'western' style numbers to
a Japanese context.
Types; "kanji", "hiragana", "romaji", "all"
"all" will return a list of kanji, hiragana and romaji
conversions
Can also convert Kanji to 'western' number using:
About
Converts Arabic numerals, or
'western' style numbers, to a
Japanese context.
www.japanesenumberconverter.…
# japanese # japanese-language
Readme
MIT license
Activity
44 stars
2 watching
6 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
Languages
Python 100.0%
Code
Issues
1
Pull requests
Actions
Projects
Security
Insights
Convert-Numbers-to-
Japanese
Usage:
Convert(number,type)
ConvertKanji(kanji)
Examples:
Convert(20.8,"romaji")
Convert(2000,"hiragana")
Convert(458,"kanji")
README
MIT license
Online version available here:
https://www.japanesenumberconverter.com/
Convert("458222","kanji")
Convert(31400,"all")
ConvertKanji("二十点五")
ConvertKanji("二十五")
|
# Convert-Numbers-to-Japanese
Converts Arabic numerals, or 'western' style numbers to a Japanese context.

## Usage:
```python
Convert(number,type)
```
**Types:** "kanji", "hiragana", "romaji", "all"\
"all" will return a list of kanji, hiragana and romaji conversions
Can also convert Kanji to 'western' number using:
```python
ConvertKanji(kanji)
```
### Examples:
```python
Convert(20.8,"romaji")
Convert(2000,"hiragana")
Convert(458,"kanji")
Convert("458222","kanji")
Convert(31400,"all")
```
```python
ConvertKanji("二十点五")
ConvertKanji("二十五")
```
Online version available here: https://www.japanesenumberconverter.com/
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/esrille/ibus-hiragana
|
2017-04-27T18:50:59Z
|
ひらがなIME for IBus
|
esrille / ibus-hiragana
Public
Branches
Tags
Go to file
Go to file
Code
data
debian
dic
dic_tools
docs
docs_md
engine
layouts
man
meson
po
setup
tests
CONT…
COPYI…
LICEN…
NOTICE
READ…
READ…
READ…
ibus-hi…
About
ひらがなIME for IBus
esrille.github.io/ibus-hiragana/
# ime # ibus
Readme
Apache-2.0, GPL-2.0 licenses found
Activity
Custom properties
68 stars
5 watching
4 forks
Report repository
Releases 42
ibus-hiragana-1.0
Latest
last week
+ 41 releases
Languages
Python 92.5%
Meson 4.3%
Shell 1.9%
HTML 1.3%
Code
Issues
5
Pull requests
Actions
Security
Insights
meson…
meson…
require…
「ひらがなIME」は、平明な日本語の文章を入力しやすくした日本語インプットメソッドです。これまでのインプットメソッドでは、むずかしい漢字のおおい文章をかいてしまいがちでした。自分でもよめないような漢字をつかってしまっていることもよくありました。
「ひらがなIME」の漢字辞書は、人名や地名をのぞけば、常用漢字表のなかの漢字だけでつくられています。ひつようがあれば、学校でそれまでにならった漢字だけをつかうように設定することもできます。
「ひらがなIME」では、文字を入力する作業と、漢字に変換する作業は、かんぜんにわかれています。キーボードでうった文字は、漢字に変換されることなく、ひらがなだけの文章としてうちだされます。漢字をつかいたいときは、本文中のひらがなをあとから、いつでも漢字におきかえることができます。入力中にひとつひとつ漢字に変換していく必要はありません。
ひらがなIME for IBus
README
Apache-2.0 license
GPL-2.0 license
また、「ふりがなパッド」を「ひらがなIME」といっしょにつかうと、入力した漢字に自動的にふりがなをつけていくことができます。いまでは、だれもがかんたんに漢字をよめるわけではないことがわかってきました。学校のデジタル教科書も総ルビのものが用意されるようになっています。これからは、漢字をよめなくてもこまらないような社会にしていくこともたいせつなことです。
|
# Hiragana IME for IBus
**Hiragana IME** is a Japanese input method that makes it easier to input plain Japanese sentences. With previous input methods, people often wrote sentences with difficult kanji characters. Many found themselves using kanji they couldn't even read.
The kanji dictionary in **Hiragana IME** is assembled using only the kanji characters from the Jōyō Kanji list, excluding names of people and places. If necessary, you can also select a kanji dictionary that includes only the kanji characters you've learned so far at school.
In **Hiragana IME**, the steps for inputting characters and converting them to kanji are separate. As you type on the keyboard, characters are entered as hiragana-only sentences without automatic kanji conversion. If you want to use kanji, you can easily replace hiragana characters anywhere in the text with kanji. There's no need to convert hiragana into kanji while typing.

Additionally, when using the [FuriganaPad](https://github.com/esrille/furiganapad) with **Hiragana IME**, you can automatically add Furigana, or readings, to the inputted kanji. Not everyone can easily read kanji due to dyslexia and other reasons. Digital Japanese textbooks now include Furigana for Japanese schoolchildren. It is crucial to reshape our society so that no one is left behind just because they cannot read kanji.
|
[
"Language Models",
"Syntactic Text Processing"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/borh/aozora-corpus-generator
|
2017-10-09T07:11:25Z
|
Generates plain or tokenized text files from the Aozora Bunko
|
borh / aozora-corpus-generator
Public
Branches
Tags
Go to file
Go to file
Code
libs
.gitignore
LICENSE
Pipfile
Pipfile.lock
README.md
aozora-corpus-generator…
author-title.csv
jisx0213-2004-std.txt
ndc-3digits.tsv
supplemental-authors.csv
unidic2udpos.py
Generates plain or tokenized text files from the Aozora Bunko [English] for use in corpus-based studies.
Primarily for use in an upcoming research project.
WARNING: Currently, the tool requires a checked-out repository of the Aozora Bunko. A git clone will take up to several hours and take up
around 14GB of space. Future versions will ease this requirement.
You must install MeCab and UniDic.
On Debian-based distros, the command below should suffice:
MacOS users can install the native dependencies with:
About
Generates plain or tokenized text
files from the Aozora Bunko
# python # japanese # corpus # mecab
# aozorabunko # aozora-bunko
Readme
BSD-2-Clause license
Activity
8 stars
2 watching
4 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
borh Bor Hodošček
dependabot[bot]
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Aozora Bunko Corpus Generator
Goals
Requirements
Aozora Bunko Repository
Native
sudo apt install -y mecab libmecab-dev unidic-mecab
brew install mecab mecab-unidic
README
BSD-2-Clause license
Python 3 is required. All testing is done on the latest stable version (currently 3.6.2), but a slightly older version should also work. Native
dependencies must be installed before installing the Python dependencies (natto-py needs MeCab).
This project uses pipenv. For existing users, the command below should suffice:
For those using pip , you can install all the dependencies using the command below:
Clone the repository and run:
You may also use the Pipenv script shortcut to run the program:
Python
pipenv install
pipenv shell
pip install natto-py jaconv lxml html5_parser
Usage
git clone https://github.com/borh/aozora-corpus-generator.git
cd aozora-corpus-generator
pipenv install
pipenv shell
python aozora-corpus-generator.py --features 'orth' --author-title-csv 'author-title.csv' --out 'Corpora/Japanese' --
pipenv run aozora --features 'orth' --author-title-csv 'author-title.csv' --out 'Corpora/Japanese' --parallel
Parameters
python aozora-corpus-generator.py --help
usage: aozora-corpus-generator.py [-h] [--features FEATURES [FEATURES ...]]
[--features-opening-delim FEATURES_OPENING_DELIM]
[--features-closing-delim FEATURES_CLOSING_DELIM]
[--author-title-csv AUTHOR_TITLE_CSV [AUTHOR_TITLE_CSV ...]]
[--aozora-bunko-repository AOZORA_BUNKO_REPOSITORY]
--out OUT [--all] [--min-tokens MIN_TOKENS]
[--no-punc] [--incremental] [--parallel]
[--verbose]
aozora-corpus-generator extracts given author and book pairs from Aozora Bunko and formats them into (optionally
tokenized) plain text files.
optional arguments:
-h, --help show this help message and exit
--features FEATURES [FEATURES ...]
specify which features should be extracted from
morphemes (default='orth')
--features-opening-delim FEATURES_OPENING_DELIM
specify opening char to use when outputting multiple
features
--features-closing-delim FEATURES_CLOSING_DELIM
specify closing char to use when outputting multiple
features
--author-title-csv AUTHOR_TITLE_CSV [AUTHOR_TITLE_CSV ...]
one or more UTF-8 formatted CSV input file(s)
(default='author-title.csv')
--aozora-bunko-repository AOZORA_BUNKO_REPOSITORY
path to the aozorabunko git repository (default='aozor
abunko/index_pages/list_person_all_extended_utf8.zip')
--out OUT output (plain, tokenized) files into given output
directory (default=Corpora)
--all specify if all Aozora Bunko texts should be extracted,
ignoring the author-title.csv (default=False)
--min-tokens MIN_TOKENS
specify minimum token count to filter files by
(default=30000)
--no-punc specify if punctuation should be discarded from
You may specify multiple values for the --features and author-title-csv parameters by putting a space between them like so: --
features orth lemma pos1 .
"Gaiji" characters with provided JIS X 0213 codepoints are converted to their equivalent Unicode codepoint. Aozora Bunko is conservative
in encoding rare Kanji, and, therefore, uses images (html version) or textual descriptions (plaintext version).
Words are sometimes emphasized in Japanese text with dots above characters, while Aozora Bunko uses bold text in their place.
Emphasis tags are currently stripped.
tokenized version (default=False)
--incremental do not overwrite existing corpus files (default=False)
--parallel specify if processing should be done in parallel
(default=True)
--verbose turns on verbose logging (default=False)
Example usage:
python aozora-corpus-generator.py --features 'orth' --author-title-csv 'author-title.csv' --out 'Corpora/Japanese'
--parallel
Issues
|
# Aozora Bunko Corpus Generator
Generates plain or tokenized text files from the [Aozora Bunko](http://www.aozora.gr.jp/) [[English](https://en.wikipedia.org/wiki/Aozora_Bunko)] for use in corpus-based studies.
# Goals
Primarily for use in an upcoming research project.
# Requirements
## Aozora Bunko Repository
**WARNING**:
Currently, the tool requires a [checked-out repository of the Aozora Bunko](https://github.com/aozorabunko/aozorabunko).
A git clone will take up to several hours and take up around **14**GB of space.
Future versions will ease this requirement.
## Native
You must install [MeCab](https://github.com/taku910/mecab) and [UniDic](https://osdn.net/projects/unidic/).
On Debian-based distros, the command below should suffice:
```bash
sudo apt install -y mecab libmecab-dev unidic-mecab
```
MacOS users can install the native dependencies with:
```bash
brew install mecab mecab-unidic
```
## Python
Python 3 is required. All testing is done on the latest stable version (currently 3.6.2), but a slightly older version should also work.
Native dependencies must be installed before installing the Python dependencies (natto-py needs MeCab).
This project uses [pipenv](https://github.com/kennethreitz/pipenv).
For existing users, the command below should suffice:
```bash
pipenv install
pipenv shell
```
For those using `pip`, you can install all the dependencies using the command below:
```bash
pip install natto-py jaconv lxml html5_parser
```
# Usage
Clone the repository and run:
```bash
git clone https://github.com/borh/aozora-corpus-generator.git
cd aozora-corpus-generator
pipenv install
pipenv shell
python aozora-corpus-generator.py --features 'orth' --author-title-csv 'author-title.csv' --out 'Corpora/Japanese' --parallel
```
You may also use the Pipenv script shortcut to run the program:
```bash
pipenv run aozora --features 'orth' --author-title-csv 'author-title.csv' --out 'Corpora/Japanese' --parallel
```
## Parameters
```bash
python aozora-corpus-generator.py --help
```
usage: aozora-corpus-generator.py [-h] [--features FEATURES [FEATURES ...]]
[--features-opening-delim FEATURES_OPENING_DELIM]
[--features-closing-delim FEATURES_CLOSING_DELIM]
[--author-title-csv AUTHOR_TITLE_CSV [AUTHOR_TITLE_CSV ...]]
[--aozora-bunko-repository AOZORA_BUNKO_REPOSITORY]
--out OUT [--all] [--min-tokens MIN_TOKENS]
[--no-punc] [--incremental] [--parallel]
[--verbose]
aozora-corpus-generator extracts given author and book pairs from Aozora Bunko and formats them into (optionally tokenized) plain text files.
optional arguments:
-h, --help show this help message and exit
--features FEATURES [FEATURES ...]
specify which features should be extracted from
morphemes (default='orth')
--features-opening-delim FEATURES_OPENING_DELIM
specify opening char to use when outputting multiple
features
--features-closing-delim FEATURES_CLOSING_DELIM
specify closing char to use when outputting multiple
features
--author-title-csv AUTHOR_TITLE_CSV [AUTHOR_TITLE_CSV ...]
one or more UTF-8 formatted CSV input file(s)
(default='author-title.csv')
--aozora-bunko-repository AOZORA_BUNKO_REPOSITORY
path to the aozorabunko git repository (default='aozor
abunko/index_pages/list_person_all_extended_utf8.zip')
--out OUT output (plain, tokenized) files into given output
directory (default=Corpora)
--all specify if all Aozora Bunko texts should be extracted,
ignoring the author-title.csv (default=False)
--min-tokens MIN_TOKENS
specify minimum token count to filter files by
(default=30000)
--no-punc specify if punctuation should be discarded from
tokenized version (default=False)
--incremental do not overwrite existing corpus files (default=False)
--parallel specify if processing should be done in parallel
(default=True)
--verbose turns on verbose logging (default=False)
Example usage:
python aozora-corpus-generator.py --features 'orth' --author-title-csv 'author-title.csv' --out 'Corpora/Japanese' --parallel
You may specify multiple values for the `--features` and `author-title-csv` parameters by putting a space between them like so: `--features orth lemma pos1`.
# Issues
- "Gaiji" characters with provided JIS X 0213 codepoints are converted to their equivalent Unicode codepoint. Aozora Bunko is conservative in encoding rare Kanji, and, therefore, uses images (html version) or textual descriptions (plaintext version).
- Words are sometimes emphasized in Japanese text with dots above characters, while Aozora Bunko uses bold text in their place. Emphasis tags are currently stripped.
|
[
"Text Segmentation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/rpryzant/JESC
|
2017-10-25T14:41:34Z
|
A large parallel corpus of English and Japanese
|
rpryzant / JESC
Public
Branches
Tags
Go to file
Go to file
Code
corpus…
corpus…
corpus…
READ…
require…
Welcome to the JESC code release! This repo contains
the crawlers, parsers, aligners, and various tools used to
create the Japanese-English Subtitle Corpus (JESC).
dataset homepage
paper presenting this dataset
Use pip: pip install -r requirements.txt
Additionally, some of the corpus_processing scripts
make use of google/sentencepiece, which has
installation instructions on its github page.
Each file is a standalone tool with usage instructions
given in the comment header. These files are organized
into the following categories (subdirectories):
About
A large parallel corpus of English
and Japanese
Readme
Activity
79 stars
7 watching
12 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 95.0%
Shell 5.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
JESC Code Release
Requirements
Instructions
README
corpus_generation: Scripts for downloading,
parsing, and aligning subtitles from the internet.
corpus_cleaning: Scripts for converting file
formats, thresholding on length ratios, and
spellchecking.
corpus_processing: Scripts for manipulating
completed datasets, including tokenization and
train/test/dev splitting.
Please give the proper citation or credit if you use these
data:
Citation
@ARTICLE{pryzant_jesc_2017,
author = {{Pryzant}, R. and {Chung},
Y. and {Jurafsky}, D. and {Britz}, D.},
title = "{JESC: Japanese-English
Subtitle Corpus}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1710.10639},
keywords = {Computer Science -
Computation and Language},
year = 2017,
month = oct,
} ```
|
# JESC Code Release
Welcome to the JESC code release! This repo contains the crawlers, parsers, aligners, and various tools used to create the Japanese-English Subtitle Corpus (JESC).
* [dataset homepage](https://cs.stanford.edu/~rpryzant/jesc/)
* [paper presenting this dataset](https://arxiv.org/abs/1710.10639)
## Requirements
Use pip: `pip install -r requirements.txt`
Additionally, some of the corpus_processing scripts make use of [google/sentencepiece](https://github.com/google/sentencepiece), which has installation instructions on its github page.
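For reference, here is a minimal sketch of how the sentencepiece Python API is typically used for subword tokenization. It is not taken from this repository's scripts; the file names and vocabulary size are placeholders.
```python
import sentencepiece as spm

# Train a small subword model on a plain-text file (one sentence per line).
# "corpus.en" and vocab_size=8000 are illustrative placeholders.
spm.SentencePieceTrainer.train(input="corpus.en", model_prefix="jesc_sp", vocab_size=8000)

# Load the trained model and split a sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="jesc_sp.model")
print(sp.encode("Subtitles are short, informal sentences.", out_type=str))
```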
## Instructions
Each file is a standalone tool with usage instructions given in the comment header. These files are organized into the following categories (subdirectories):
* **corpus_generation**: Scripts for downloading, parsing, and aligning subtitles from the internet.
* **corpus_cleaning**: Scripts for converting file formats, thresholding on length ratios, and spellchecking.
* **corpus_processing**: Scripts for manipulating completed datasets, including tokenization and train/test/dev splitting.
## Citation
Please give the proper citation or credit if you use these data:
```
@ARTICLE{pryzant_jesc_2017,
author = {{Pryzant}, R. and {Chung}, Y. and {Jurafsky}, D. and {Britz}, D.},
title = "{JESC: Japanese-English Subtitle Corpus}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1710.10639},
keywords = {Computer Science - Computation and Language},
year = 2017,
month = oct,
}
```
|
[
"Machine Translation",
"Multilinguality"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/ktnyt/go-moji
|
2018-01-15T14:30:36Z
|
A Go library for Zenkaku/Hankaku conversion
|
ktnyt / go-moji
Public
Branches
Tags
Go to file
Go to file
Code
.circleci
.gitignore
LICEN…
READ…
conver…
conver…
default…
diction…
maybe…
Build Status go report
go report A+
A+
GoDoc
This package provides a Go interface for converting between Zenkaku (全
角 i.e. full-width) and Hankaku (半角 i.e. half-width) characters (mostly for
Japanese). The library has been largely influenced by niwaringo/moji the
JavaScript implementation.
For detailed information of the API, see the documents.
Use go get :
About
A Go library for Zenkaku/Hankaku
conversion
# go # japanese # conversion
Readme
MIT license
Activity
19 stars
3 watching
2 forks
Report repository
Releases 1
Version 1.0.0
Latest
on Jan 16, 2018
Packages
No packages published
Contributors
2
Languages
Go 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
go-moji
Installation
$ go get github.com/ktnyt/go-moji
README
MIT license
This package has only been tested on Go >= 1.8. Beware when using
lower versions.
Copyright (C) 2018 by Kotone Itaya <kotone [at] sfc.keio.ac.jp>
go-moji is released under the terms of the MIT License. See LICENSE for
details.
Requirements
Example
package main
import (
"fmt"
"github.com/ktnyt/go-moji"
)
func main() {
s := "ABC ABC あがぱ アガパ アガパ"
// Convert Zenkaku Eisuu to Hankaku Eisuu
fmt.Println(moji.Convert(s, moji.ZE, moji.HE))
// Convert Hankaku Eisuu to Zenkaku Eisuu
fmt.Println(moji.Convert(s, moji.HE, moji.ZE))
// Convert HiraGana to KataKana
fmt.Println(moji.Convert(s, moji.HG, moji.KK))
// Convert KataKana to HiraGana
fmt.Println(moji.Convert(s, moji.KK, moji.HG))
// Convert Zenkaku Katakana to Hankaku Katakana
fmt.Println(moji.Convert(s, moji.ZK, moji.HK))
// Convert Hankaku Katakana to Zenkaku Katakana
fmt.Println(moji.Convert(s, moji.HK, moji.ZK))
// Convert Zenkaku Space to Hankaku Space
fmt.Println(moji.Convert(s, moji.ZS, moji.HS))
// Convert Hankaku Space to Zenkaku Space
fmt.Println(moji.Convert(s, moji.HS, moji.ZS))
}
Copyright
|
# go-moji
[](https://circleci.com/gh/ktnyt/go-moji)
[](https://goreportcard.com/report/github.com/ktnyt/go-moji)
[](http://godoc.org/github.com/ktnyt/go-moji)
This package provides a Go interface for converting between Zenkaku (全角 i.e. full-width) and Hankaku (半角 i.e. half-width) characters (mostly for Japanese). The library has been largely influenced by [niwaringo/moji](https://github.com/niwaringo/moji) the JavaScript implementation.
For detailed information of the API, see the [documents](https://godoc.org/github.com/ktnyt/go-moji).
## Installation
Use `go get`:
```sh
$ go get github.com/ktnyt/go-moji
```
## Requirements
This package has only been tested on Go >= 1.8. Beware when using lower versions.
## Example
```go
package main
import (
"fmt"
"github.com/ktnyt/go-moji"
)
func main() {
s := "ABC ABC あがぱ アガパ アガパ"
// Convert Zenkaku Eisuu to Hankaku Eisuu
fmt.Println(moji.Convert(s, moji.ZE, moji.HE))
// Convert Hankaku Eisuu to Zenkaku Eisuu
fmt.Println(moji.Convert(s, moji.HE, moji.ZE))
// Convert HiraGana to KataKana
fmt.Println(moji.Convert(s, moji.HG, moji.KK))
// Convert KataKana to HiraGana
fmt.Println(moji.Convert(s, moji.KK, moji.HG))
// Convert Zenkaku Katakana to Hankaku Katakana
fmt.Println(moji.Convert(s, moji.ZK, moji.HK))
// Convert Hankaku Katakana to Zenkaku Katakana
fmt.Println(moji.Convert(s, moji.HK, moji.ZK))
// Convert Zenkaku Space to Hankaku Space
fmt.Println(moji.Convert(s, moji.ZS, moji.HS))
// Convert Hankaku Space to Zenkaku Space
fmt.Println(moji.Convert(s, moji.HS, moji.ZS))
}
```
## Copyright
Copyright (C) 2018 by Kotone Itaya <kotone [at] sfc.keio.ac.jp>
go-moji is released under the terms of the MIT License.
See [LICENSE](https://github.com/ktnyt/go-moji/blob/master/LICENSE) for details.
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/PSeitz/wana_kana_rust
|
2018-02-02T17:37:29Z
|
and Romaji
|
PSeitz / wana_kana_rust
Public
Branches
Tags
Go to file
Go to file
Code
.github…
.idea
bench…
benches
nodejs…
src
tests
upstre…
wana_…
.gitignore
.travis.…
Cargo.…
LICEN…
READ…
covera…
make_…
rustfmt…
crates.io
crates.io v4.0.0
v4.0.0 docs
docs passing
passing build
build passing
passing coverage
coverage
100%
100%
About
Utility library for checking and
converting between Japanese
characters - Hiragana, Katakana -
and Romaji
# converter # katakana # japanese # romaji
# kana # romaji-translation
Readme
MIT license
Activity
70 stars
4 watching
14 forks
Report repository
Releases 2
v4.0.0
Latest
on Oct 2
+ 1 release
Packages
No packages published
Contributors
5
Languages
Rust 99.1%
Other 0.9%
Code
Issues
3
Pull requests
1
Actions
Projects
Security
Insights
README
MIT license
Utility library for checking and converting between Japanese characters -
Hiragana, Katakana - and Romaji (Ported from
https://github.com/WaniKani/WanaKana V4.0.2)
On Migrating to 2.0 some performance improvements have been implemented
by using more efficient lookup structures and avoiding allocations. According to
these results around 1000 words can be converted per millisecond on a Core
i7-6700.
WanaKana Rust
ワナカナ <--> WanaKana <--> わなかな
[dependencies]
wana_kana = "4.0"
Examples
use wana_kana::to_romaji::*;
use wana_kana::to_kana::*;
use wana_kana::to_hiragana::*;
use wana_kana::Options;
assert_eq!(to_romaji("ワナカナ"), "wanakana");
assert_eq!(to_hiragana("WanaKana"), "わなかな");
assert_eq!(to_kana("WANAKANA"), "ワナカナ");
Tests
Performance
bench_hiragana_to_romaji 3,519 1,070 -2,449 -69.59% x 3.29
bench_kana_1 3,066 567 -2,499 -81.51% x 5.41
bench_kana_2 8,006 1,831 -6,175 -77.13% x 4.37
bench_katakana_to_hiragana 2,512 622 -1,890 -75.24% x 4.04
bench_katakana_to_katakana 1,664 629 -1,035 -62.20% x 2.65
bench_katakana_to_romaji 6,922 1,067 -5,855 -84.59% x 6.49
bench_romaji_to_hiragana 3,802 1,300 -2,502 -65.81% x 2.92
bench_romaji_to_katakana 4,361 1,929 -2,432 -55.77% x 2.26
Comparison To WanaKana
A detailed analysis has been done in the bench_compare subfolder, the analysis below may be inaccurate.
A short comparison suggests around 25x performance
node -r esm run.js
import toKana from './src/toKana';
import toHiragana from './src/toHiragana';
import toKatakana from './src/toKatakana';
import toRomaji from './src/toRomaji';
console.time("yo")
for (var i = 0; i < 1000; i++) {
toKana('aiueosashisusesonaninunenokakikukeko')
toKana('AIUEOSASHISUSESONANINUNENOKAKIKUKEKO')
toHiragana('aiueosashisusesonaninunenokakikukeko')
toHiragana('アイウエオサシスセソナニヌネノカキクケコ')
toKatakana('aiueosashisusesonaninunenokakikukeko')
toKatakana('あいうえおさしすせそなにぬねのかきくけこ')
toRomaji('あいうえおさしすせそなにぬねのかきくけこ')
toRomaji('アイウエオサシスセソナニヌネノカキクケコ')
}
console.timeEnd("yo")
extern crate wana_kana;
use wana_kana::to_hiragana::to_hiragana;
use wana_kana::to_katakana::to_katakana;
use wana_kana::to_romaji::to_romaji;
use wana_kana::to_kana::*;
fn main() {
let start = std::time::Instant::now();
for _ in 0..1000 {
to_kana("aiueosashisusesonaninunenokakikukeko");
to_kana("AIUEOSASHISUSESONANINUNENOKAKIKUKEKO");
to_hiragana("aiueosashisusesonaninunenokakikukeko");
node -r esm run.js 253.231ms
cargo run --release --bin bench 9ms
cargo install wana_kana will install 2 CLI tools: to_kana and to_romaji .
Both commands support piping ls | to kana and parameters to romaji へ
to_hiragana("アイウエオサシスセソナニヌネノカキクケコ");
to_katakana("aiueosashisusesonaninunenokakikukeko");
to_katakana("あいうえおさしすせそなにぬねのかきくけこ");
to_romaji("あいうえおさしすせそなにぬねのかきくけこ");
to_romaji("アイウエオサシスセソナニヌネノカキクケコ");
}
println!("{:?}", start.elapsed().as_millis());
}
CLI
Convert to kana and back for fun and profit
|
[](https://crates.io/crates/wana_kana)
[](https://docs.rs/crate/wana_kana/)
[](https://travis-ci.org/PSeitz/wana_kana_rust)
[](https://coveralls.io/github/PSeitz/wana_kana_rust?branch=master)
## WanaKana Rust
### ワナカナ <--> WanaKana <--> わなかな
```toml,ignore
[dependencies]
wana_kana = "4.0"
```
Utility library for checking and converting between Japanese characters - Hiragana, Katakana - and Romaji (Ported from https://github.com/WaniKani/WanaKana V4.0.2)
## Examples
```
use wana_kana::to_romaji::*;
use wana_kana::to_kana::*;
use wana_kana::to_hiragana::*;
use wana_kana::Options;
assert_eq!(to_romaji("ワナカナ"), "wanakana");
assert_eq!(to_hiragana("WanaKana"), "わなかな");
assert_eq!(to_kana("WANAKANA"), "ワナカナ");
```
## Tests

## Performance
On Migrating to 2.0 some performance improvements have been implemented by using more efficient lookup structures and avoiding allocations.
According to these results around 1000 words can be converted per millisecond on a Core i7-6700.
```
bench_hiragana_to_romaji 3,519 1,070 -2,449 -69.59% x 3.29
bench_kana_1 3,066 567 -2,499 -81.51% x 5.41
bench_kana_2 8,006 1,831 -6,175 -77.13% x 4.37
bench_katakana_to_hiragana 2,512 622 -1,890 -75.24% x 4.04
bench_katakana_to_katakana 1,664 629 -1,035 -62.20% x 2.65
bench_katakana_to_romaji 6,922 1,067 -5,855 -84.59% x 6.49
bench_romaji_to_hiragana 3,802 1,300 -2,502 -65.81% x 2.92
bench_romaji_to_katakana 4,361 1,929 -2,432 -55.77% x 2.26
```
### Comparison To [WanaKana](https://github.com/WaniKani/WanaKana)
A detailed analysis has been done in the [bench_compare](bench_compare/README.md) subfolder, the analysis below may be inaccurate.
A short comparison suggests around 25x performance
```javascript
import toKana from './src/toKana';
import toHiragana from './src/toHiragana';
import toKatakana from './src/toKatakana';
import toRomaji from './src/toRomaji';
console.time("yo")
for (var i = 0; i < 1000; i++) {
toKana('aiueosashisusesonaninunenokakikukeko')
toKana('AIUEOSASHISUSESONANINUNENOKAKIKUKEKO')
toHiragana('aiueosashisusesonaninunenokakikukeko')
toHiragana('アイウエオサシスセソナニヌネノカキクケコ')
toKatakana('aiueosashisusesonaninunenokakikukeko')
toKatakana('あいうえおさしすせそなにぬねのかきくけこ')
toRomaji('あいうえおさしすせそなにぬねのかきくけこ')
toRomaji('アイウエオサシスセソナニヌネノカキクケコ')
}
console.timeEnd("yo")
```
`node -r esm run.js`
```rust
extern crate wana_kana;
use wana_kana::to_hiragana::to_hiragana;
use wana_kana::to_katakana::to_katakana;
use wana_kana::to_romaji::to_romaji;
use wana_kana::to_kana::*;
fn main() {
let start = std::time::Instant::now();
for _ in 0..1000 {
to_kana("aiueosashisusesonaninunenokakikukeko");
to_kana("AIUEOSASHISUSESONANINUNENOKAKIKUKEKO");
to_hiragana("aiueosashisusesonaninunenokakikukeko");
to_hiragana("アイウエオサシスセソナニヌネノカキクケコ");
to_katakana("aiueosashisusesonaninunenokakikukeko");
to_katakana("あいうえおさしすせそなにぬねのかきくけこ");
to_romaji("あいうえおさしすせそなにぬねのかきくけこ");
to_romaji("アイウエオサシスセソナニヌネノカキクケコ");
}
println!("{:?}", start.elapsed().as_millis());
}
```
`node -r esm run.js` *253.231ms*
`cargo run --release --bin bench` *9ms*
### CLI
#### Convert to kana and back for fun and profit
`cargo install wana_kana` will install 2 CLI tools: `to_kana` and `to_romaji`.
Both commands support piping `ls | to_kana` and parameters `to_romaji へろ をるど`.
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/mkan0141/toEmoji
|
2018-02-25T05:50:42Z
|
日本語文を絵文字だけの文に変換するなにか
|
mkan0141 / toEmoji
Public
1 Branch
Tags
Go to file
Go to file
Code
mkan0141 add: LICENCE
c425cb1 · 6 years ago
image
update: Dem…
6 years ago
wikiext…
update: Dem…
6 years ago
.gitignore
update: test用…
6 years ago
LICEN…
add: LICENCE
6 years ago
READ…
add: LICENCE
6 years ago
emoji.j…
update:絵文…
6 years ago
toEmoj…
update: 少し…
6 years ago
与えられた日本語文をそれらしい絵文字だけの文に変換するコ
ンソールアプリケーション
入力された日本語文を形態素解析し、意味のある単語を抽出す
る。
そして、抽出された単語と絵文字と意味の一対一表を用いて類
似度の最も高かった絵文字を出力して文章を生成します。
About
text to 「emoji string」
# emoji # nlp # python3
Readme
MIT license
Activity
4 stars
2 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
toEmoji
Description
README
MIT license
モデルも公開したいのですが容量が大きくて公開できませんで
した...。対策を考えているところです。
モデルの作り方はこのサイトを参考にしました
【Python】Word2Vecの使い方
Install
$ git clone https://github.com/mkan0141/toEmoji
|
# toEmoji
与えられた日本語文をそれらしい絵文字だけの文に変換するコンソールアプリケーション
<img src="./image/sample.png" alt="demo" title="demo">
## Description
入力された日本語文を形態素解析し、意味のある単語を抽出する。
そして、抽出された単語と絵文字と意味の一対一表を用いて類似度の最も高かった絵文字を出力して文章を生成します。
モデルも公開したいのですが容量が大きくて公開できませんでした...。対策を考えているところです。
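As a rough illustration of the approach described above (morphological analysis plus word2vec similarity against a word-to-emoji table), here is a minimal sketch. It is not the repository's code: the model path, the emoji table, and the use of gensim are all assumptions, since the actual model is not published.
```python
from gensim.models import KeyedVectors

# Assumptions: a pre-trained Japanese word2vec model on disk and a tiny
# emoji -> meaning-word table; the repository's own model is not published.
model = KeyedVectors.load_word2vec_format("ja_word2vec.bin", binary=True)
emoji_meanings = {"🐶": "犬", "🍜": "ラーメン", "🌸": "桜"}

def to_emoji(word):
    """Return the emoji whose meaning word is most similar to `word` (None if out of vocabulary)."""
    scored = [(emoji, model.similarity(word, meaning))
              for emoji, meaning in emoji_meanings.items()
              if word in model and meaning in model]
    return max(scored, key=lambda kv: kv[1])[0] if scored else None

print(to_emoji("子犬"))  # ideally prints 🐶
```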
## Install
```txt
$ git clone https://github.com/mkan0141/toEmoji
```
モデルの作り方はこのサイトを参考にしました
[【Python】Word2Vecの使い方](https://qiita.com/kenta1984/items/93b64768494f971edf86)
|
[
"Syntactic Text Processing",
"Text Generation",
"Text Normalization",
"Text Style Transfer"
] |
[] |
true |
https://github.com/nuko-yokohama/ramendb
|
2018-03-24T11:56:42Z
|
なんとかデータベース( https://supleks.jp/ )からのスクレイピングツールと収集データ
|
nuko-yokohama / ramendb
Public
Branches
Tags
Go to file
Go to file
Code
chatgpt3
data
ddl
query_…
reports
READ…
YLR.sh
YRR.sh
aggs.sh
aggs_i…
close-s…
get_rd…
get_rd…
get_rd…
get_rd…
get_rd…
get_rd…
load_r…
load_s…
load_s…
load_u…
About
なんとかデータベース(
https://supleks.jp/ )からのスクレイ
ピングツールと収集データ
Readme
Activity
7 stars
3 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 82.4%
Shell 17.6%
Code
Issues
2
Pull requests
Actions
Projects
Security
Insights
not-visi…
visited.sh
ラーメンデータベース( https://ramendb.supleks.jp/ )のサ
イトから、
ユーザ
店舗
レビュー
レビューコメント
の内容をスクレイピングするPythonスクリプトと、その
スクリプトで収集したデータ、PostgreSQL用のテーブ
ル定義例をまとめた。
店舗のステータスとして「提供終了」があるので、
それも別に収集すること。
提供終了のサンプル: https://kanagawa-
ramendb.supleks.jp/s/59819.html
ramendb
TODO
README
|
# ramendb
ラーメンデータベース( https://ramendb.supleks.jp/ )のサイトから、
* ユーザ
* 店舗
* レビュー
* レビューコメント
の内容をスクレイピングするPythonスクリプトと、そのスクリプトで収集したデータ、PostgreSQL用のテーブル定義例をまとめた。
## TODO
* 店舗のステータスとして「提供終了」があるので、それも別に収集すること。
* 提供終了のサンプル: https://kanagawa-ramendb.supleks.jp/s/59819.html
|
[] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/DJTB/hatsuon
|
2018-04-06T02:03:00Z
|
Japanese pitch accent utils
|
DJTB / hatsuon
Public
4 Branches
5 Tags
Go to file
Go to file
Code
DJTB feat: use named exports, exp…
8c09a8e · 2 years ago
.github
build: add co…
2 years ago
src
feat: use nam…
2 years ago
.all-con…
chore: genera…
6 years ago
.editor…
chore: initial c…
6 years ago
.eslinti…
chore: initial c…
6 years ago
.eslintr…
build: use roll…
2 years ago
.gitattri…
chore: initial c…
6 years ago
.gitignore
build: use roll…
2 years ago
.prettierrc
build: use roll…
2 years ago
.travis.…
ci: update tra…
2 years ago
babel.c…
build: use roll…
2 years ago
code_…
chore(clean u…
6 years ago
contrib…
chore: initial c…
6 years ago
license
chore: initial c…
6 years ago
packag…
build: use roll…
2 years ago
packag…
build: use roll…
2 years ago
packag…
build: use roll…
2 years ago
readm…
feat: use nam…
2 years ago
rollup.c…
build: use roll…
2 years ago
About
Japanese pitch accent utils
djtb.github.io/hatsuon
# pattern # japanese # pronunciation # pitch
# accent # mora # on # haku
Readme
MIT license
Code of conduct
Activity
25 stars
4 watching
0 forks
Report repository
Releases 2
v2.0.0
Latest
on Mar 14, 2022
+ 1 release
Packages
No packages published
Languages
JavaScript 100.0%
Code
Issues
Pull requests
2
Actions
Projects
Security
Insights
README
Code of conduct
MIT license
Japanese pitch accent tools
npm v2.0.0 downloads 1.7k
Travis branch coverage 100%
commitizen friendly code of conduct
Japanese dictionaries often display the pitch accent of a word with a
single number that determines where the pitch falls. This can be
difficult to mentally visualize without counting through the mora of
the word. This library provides useful tools for generating pitch
patterns which can then be easily displayed via SVG.
Visualization Example
Extra available utils (see source for documentation):
発音 hatsuon
Why?
Installation
npm install --save hatsuon
Demo
Usage
import { hatsuon } from 'hatsuon';
hatsuon({ reading: 'ちゅうがっこう', pitchNum: 3 });
// =>
{
reading: 'ちゅうがっこう',
pitchNum: 3,
morae: ['ちゅ', 'う', 'が', 'っ', 'こ', 'う'],
// low, high, high, low, low, low, low*
// *following particle (は、が, の etc) pitch
pattern: [0, 1, 1, 0, 0, 0, 0],
patternName: '中高', // nakadaka
}
import {
isDigraph,
getMorae,
WanaKana : Japanese romaji <-> kana transliteration library
Thanks goes to these people (emoji key):
💻 📖 🚇 🎨
This project follows the all-contributors specification. Contributions
of any kind welcome!
MIT © Duncan Bay
getMoraCount,
makePitchPattern,
getPitchPatternName,
getPitchNumFromPattern,
} from 'hatsuon';
Related
Contributors
Duncan Bay
License
|
# 発音 hatsuon
> Japanese pitch accent tools
[](https://www.npmjs.com/package/hatsuon)
[](https://npm-stat.com/charts.html?package=hatsuon&from=2016-04-01)
[](https://travis-ci.com/DJTB/hatsuon)
[](https://codecov.io/github/DJTB/hatsuon)
<br />
[](http://commitizen.github.io/cz-cli/)
[](./code_of_conduct.md)
## Why?
Japanese dictionaries often display the pitch accent of a word with a single number that determines where the pitch falls. This can be difficult to mentally visualize without counting through the [mora](<https://en.wikipedia.org/wiki/Mora_(linguistics)#Japanese>) of the word. This library provides useful tools for generating pitch patterns which can then be easily displayed via SVG.
## Installation
```sh
npm install --save hatsuon
```
## Demo
[Visualization Example](https://djtb.github.io/hatsuon)
## Usage
```js
import { hatsuon } from 'hatsuon';
hatsuon({ reading: 'ちゅうがっこう', pitchNum: 3 });
// =>
{
reading: 'ちゅうがっこう',
pitchNum: 3,
morae: ['ちゅ', 'う', 'が', 'っ', 'こ', 'う'],
// low, high, high, low, low, low, low*
// *following particle (は、が, の etc) pitch
pattern: [0, 1, 1, 0, 0, 0, 0],
patternName: '中高', // nakadaka
}
```
Extra available utils (see source for documentation):
```js
import {
isDigraph,
getMorae,
getMoraCount,
makePitchPattern,
getPitchPatternName,
getPitchNumFromPattern,
} from 'hatsuon';
```
## Related
[WanaKana](https://github.com/WaniKani/WanaKana) : Japanese romaji <-> kana transliteration library
## Contributors
Thanks goes to these people ([emoji key](https://github.com/kentcdodds/all-contributors#emoji-key)):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore -->
| [<img src="https://avatars3.githubusercontent.com/u/5353151?s=100" width="100px;"/><br /><sub><b>Duncan Bay</b></sub>](https://github.com/DJTB)<br />[💻](https://github.com/DJTB/hatsuon/commits?author=DJTB "Code") [📖](https://github.com/DJTB/hatsuon/commits?author=DJTB "Documentation") [🚇](#infra-DJTB "Infrastructure (Hosting, Build-Tools, etc)") [🎨](#design-DJTB "Design") |
| :---: |
<!-- ALL-CONTRIBUTORS-LIST:END -->
This project follows the [all-contributors](https://github.com/kentcdodds/all-contributors) specification. Contributions of any kind welcome!
## License
MIT © [Duncan Bay](https://github.com/DJTB)
|
[
"Phonology",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/maruamyu/imas-ime-dic
|
2018-04-24T17:05:38Z
|
THE IDOLM@STER words dictionary for Japanese IME (by imas-db.jp)
|
maruamyu / imas-ime-dic
Public
Branches
Tags
Go to file
Go to file
Code
dist
.gitignore
CNAME
LICEN…
READ…
conver…
dic.txt
go.mod
index.h…
THE IDOLM@STER (アイドルマスター)にかかわる単語
を記述した日本語IME向けの辞書ファイルです。
アイマスDBが管理を行っています。
Windows
Microsoft IME
メモ帳で編集可能
Android
Google日本語入力
主要なテキストエディタで編集可能
以下の文字が利用可能
About
THE IDOLM@STER words
dictionary for Japanese IME (by
imas-db.jp)
ime.imas-db.jp/
Readme
MIT license
Activity
Custom properties
26 stars
5 watching
10 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
4
Languages
Go 53.3%
HTML 46.7%
Code
Issues
2
Pull requests
Actions
Security
Insights
maruamyu/imas-ime-dic
想定環境
README
MIT license
﨑 … 「赤﨑千夏」など
♥ … 「私はアイドル♥」など
♡ … 「仲良しでいようね♡」など
Ø … 「ØωØver!!」
➚ … 「JOKER➚オールマイティ」
è … 「Cafè Parade」
俠 … 「俠気乱舞」
✿ … 「花ざかりWeekend✿」
上記想定環境から、以下のように決めます。
文字コードは UTF-16 LE
Windowsのメモ帳で扱えるようにするため
BOM付きにする
改行コードは CRLF
リポジトリ内にある convert_dic.go をGo言語でコンパ
イルして実行すると dist/ ディレクトリ以下にファイル
が生成されます。
gboard.zip : Gboard(Android版)の単語リストにイン
ポートするためのファイル
macosx.plist : Mac OS Xの「キーボード」→「ユー
ザー辞書」にドラッグ&ドロップで登録するための
ファイル
skk-jisyo.imas.utf8 : SKK辞書ファイル (AquaSKKで
動作確認済)
2021-10-22 から生成したファイルをコミットするよう
にしました。
リポジトリ内のテキストファイルは、MITライセンス下
で配布されます。
取り決め
他形式へのコンバート
go get
go build convert_dic.go
./convert_dic
License
This repository is released under the MIT License, see
LICENSE
各社の商標または登録商標が含まれる場合があります
が、営利利用を意図したものではありません。
歓迎します。 forkして、新規branchを作成して、pullリ
クエストしてください。
コントリビューション
|
maruamyu/imas-ime-dic
=====================
[THE IDOLM@STER (アイドルマスター)](http://idolmaster.jp/)にかかわる単語を記述した日本語IME向けの辞書ファイルです。
[アイマスDB](https://imas-db.jp/)が管理を行っています。
## 想定環境
- Windows
- Microsoft IME
- メモ帳で編集可能
- Android
- Google日本語入力
- 主要なテキストエディタで編集可能
- 以下の文字が利用可能
- 﨑 … 「赤﨑千夏」など
- ♥ … 「私はアイドル♥」など
- ♡ … 「仲良しでいようね♡」など
- Ø … 「ØωØver!!」
- ➚ … 「JOKER➚オールマイティ」
- è … 「Cafè Parade」
- 俠 … 「俠気乱舞」
- ✿ … 「花ざかりWeekend✿」
## 取り決め
上記想定環境から、以下のように決めます。
- 文字コードは UTF-16 LE
- Windowsのメモ帳で扱えるようにするためBOM付きにする
- 改行コードは CRLF
## 他形式へのコンバート
リポジトリ内にある *convert_dic.go* を[Go言語](https://golang.org/)でコンパイルして実行すると
*dist/* ディレクトリ以下にファイルが生成されます。
- *gboard.zip* : Gboard(Android版)の単語リストにインポートするためのファイル
- *macosx.plist* : Mac OS Xの「キーボード」→「ユーザー辞書」にドラッグ&ドロップで登録するためのファイル
- *skk-jisyo.imas.utf8* : SKK辞書ファイル (AquaSKKで動作確認済)
```bash
go get
go build convert_dic.go
./convert_dic
```
2021-10-22 から生成したファイルをコミットするようにしました。
## License
リポジトリ内のテキストファイルは、MITライセンス下で配布されます。
This repository is released under the MIT License, see [LICENSE](LICENSE)
各社の商標または登録商標が含まれる場合がありますが、営利利用を意図したものではありません。
## コントリビューション
歓迎します。
forkして、新規branchを作成して、pullリクエストしてください。
|
[] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/r9y9/pyopenjtalk
|
2018-08-06T15:35:16Z
|
Python wrapper for OpenJTalk
|
r9y9 / pyopenjtalk
Public
3 Branches
16 Tags
Go to file
Go to file
Code
r9y9 bump new dev ver
a3e2115 · 4 months ago
.github/workflows
drop python 3.7
4 months ago
docs
prep for release
2 years ago
lib
update open_jtalk rev
4 months ago
pyopenjtalk
FIX: remove legacy API
6 months ago
tests
fix tests
last year
.gitignore
Add htsengine module
3 years ago
.gitmodules
add hts_engine_API as a subm…
3 years ago
.travis.yml
Fix travis config
3 years ago
LICENSE.md
init
6 years ago
MANIFEST.in
#21: adjust configs for creat…
3 years ago
README.md
add userdict support in README
last year
pyproject.toml
BLD: remove distutil from setup…
9 months ago
release.sh
Release fixes
6 years ago
setup.cfg
init
6 years ago
setup.py
bump new dev ver
4 months ago
tox.ini
init
6 years ago
pypi
pypi v0.3.4
v0.3.4
Python package
Python package passing
passing build
build passing
passing license
license MIT
MIT DOI
DOI
10.5281/zenodo.12736538
10.5281/zenodo.12736538
A python wrapper for OpenJTalk.
The package consists of two core components:
Text processing frontend based on OpenJTalk
Speech synthesis backend using HTSEngine
The package is built with the modified version of OpenJTalk. The modified version provides the same functionality with some
improvements (e.g., cmake support) but is technically different from the one from HTS working group.
The package also uses the modified version of hts_engine_API. The same applies as above.
Before using the pyopenjtalk package, please have a look at the LICENSE for the two software.
The python package relies on cython to make python bindings for open_jtalk and hts_engine_API. You must need the following tools to build
and install pyopenjtalk:
About
Python wrapper for OpenJTalk
r9y9.github.io/pyopenjtalk/
Readme
View license
Activity
202 stars
7 watching
68 forks
Report repository
Releases 15
v0.3.4
Latest
on Jul 13
+ 14 releases
Packages
No packages published
Contributors
7
Languages
Python 51.3%
Cython 47.2%
Shell 1.5%
Code
Issues
14
Pull requests
5
Actions
Projects
Security
Insights
Fix
pyopenjtalk
Notice
Build requirements
README
License
C/C++ compilers (to build C/C++ extentions)
cmake
cython
Linux
Mac OSX
Windows (MSVC) (see this PR)
To build the package locally, you will need to make sure to clone open_jtalk and hts_engine_API.
and then run
Please check the notebook version here (nbviewer).
Please check lab_format.pdf in HTS-demo_NIT-ATR503-M001.tar.bz2 for more details about full-context labels.
Supported platforms
Installation
pip install pyopenjtalk
Development
git submodule update --recursive --init
pip install -e .
Quick demo
TTS
In [1]: import pyopenjtalk
In [2]: from scipy.io import wavfile
In [3]: x, sr = pyopenjtalk.tts("おめでとうございます")
In [4]: wavfile.write("test.wav", sr, x.astype(np.int16))
Run text processing frontend only
In [1]: import pyopenjtalk
In [2]: pyopenjtalk.extract_fullcontext("こんにちは")
Out[2]:
['xx^xx-sil+k=o/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:xx_xx#xx_xx@xx_xx|xx_xx/G:5_5%0_xx_xx/
'xx^sil-k+o=N/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/
'sil^k-o+N=n/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I
'k^o-N+n=i/A:-3+2+4/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1
'o^N-n+i=ch/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:
'N^n-i+ch=i/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:
'n^i-ch+i=w/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:
'i^ch-i+w=a/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:
'ch^i-w+a=sil/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I
'i^w-a+sil=xx/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I
'w^a-sil+xx=xx/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:xx+xx_xx/E:5_5!0_xx-xx/F:xx_xx#xx_xx@xx_xx|xx_xx/G:xx_xx%xx_xx_xx/H
Grapheme-to-phoneme (G2P)
In [1]: import pyopenjtalk
In [2]: pyopenjtalk.g2p("こんにちは")
1. Create a CSV file (e.g. user.csv ) and write custom words like below:
2. Call mecab_dict_index to compile the CSV file.
3. Call update_global_jtalk_with_user_dict to apply the user dictionary.
After v0.3.0, the run_marine option has been available for estimating the Japanese accent with the DNN-based method (see marine). If you
want to use the feature, please install pyopenjtalk as below;
And then, you can use the option as the following examples;
pyopenjtalk: MIT license (LICENSE.md)
Open JTalk: Modified BSD license (COPYING)
htsvoice in this repository: Please check pyopenjtalk/htsvoice/README.md.
marine: Apache 2.0 license (LICENSE)
HTS Working Group for their dedicated efforts to develop and maintain Open JTalk.
Out[2]: 'k o N n i ch i w a'
In [3]: pyopenjtalk.g2p("こんにちは", kana=True)
Out[3]: 'コンニチワ'
Create/Apply user dictionary
GNU,,,1,名詞,一般,*,*,*,*,GNU,グヌー,グヌー,2/3,*
In [1]: import pyopenjtalk
In [2]: pyopenjtalk.mecab_dict_index("user.csv", "user.dic")
reading user.csv ... 1
emitting double-array: 100% |###########################################|
done!
In [3]: pyopenjtalk.g2p("GNU")
Out[3]: 'j i i e n u y u u'
In [4]: pyopenjtalk.update_global_jtalk_with_user_dict("user.dic")
In [5]: pyopenjtalk.g2p("GNU")
Out[5]: 'g u n u u'
About run_marine option
pip install pyopenjtalk[marine]
In [1]: import pyopenjtalk
In [2]: x, sr = pyopenjtalk.tts("おめでとうございます", run_marine=True) # for TTS
In [3]: label = pyopenjtalk.extract_fullcontext("こんにちは", run_marine=True) # for text processing frontend only
LICENSE
Acknowledgements
|
# pyopenjtalk
[](https://pypi.python.org/pypi/pyopenjtalk)
[](https://github.com/r9y9/pyopenjtalk/actions/workflows/ci.yaml)
[](https://app.travis-ci.com/r9y9/pyopenjtalk)
[](LICENSE.md)
[](https://zenodo.org/badge/latestdoi/143748865)
A python wrapper for [OpenJTalk](http://open-jtalk.sp.nitech.ac.jp/).
The package consists of two core components:
- Text processing frontend based on OpenJTalk
- Speech synthesis backend using HTSEngine
## Notice
- The package is built with the [modified version of OpenJTalk](https://github.com/r9y9/open_jtalk). The modified version provides the same functionality with some improvements (e.g., cmake support) but is technically different from the one from HTS working group.
- The package also uses the [modified version of hts_engine_API](https://github.com/r9y9/hts_engine_API). The same applies as above.
Before using the pyopenjtalk package, please have a look at the LICENSE for the two software.
## Build requirements
The Python package relies on Cython to make Python bindings for open_jtalk and hts_engine_API. You need the following tools to build and install pyopenjtalk:
- C/C++ compilers (to build C/C++ extensions)
- cmake
- cython
## Supported platforms
- Linux
- Mac OSX
- Windows (MSVC) (see [this PR](https://github.com/r9y9/pyopenjtalk/pull/13))
## Installation
```
pip install pyopenjtalk
```
## Development
To build the package locally, you will need to make sure to clone open_jtalk and hts_engine_API.
```
git submodule update --recursive --init
```
and then run
```
pip install -e .
```
## Quick demo
Please check the notebook version [here (nbviewer)](https://nbviewer.jupyter.org/github/r9y9/pyopenjtalk/blob/master/docs/notebooks/Demo.ipynb).
### TTS
```py
In [1]: import pyopenjtalk
In [2]: from scipy.io import wavfile
In [3]: x, sr = pyopenjtalk.tts("おめでとうございます")
In [4]: wavfile.write("test.wav", sr, x.astype(np.int16))
```
### Run text processing frontend only
```py
In [1]: import pyopenjtalk
In [2]: pyopenjtalk.extract_fullcontext("こんにちは")
Out[2]:
['xx^xx-sil+k=o/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:xx_xx#xx_xx@xx_xx|xx_xx/G:5_5%0_xx_xx/H:xx_xx/I:xx-xx@xx+xx&xx-xx|xx+xx/J:1_5/K:1+1-5',
'xx^sil-k+o=N/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'sil^k-o+N=n/A:-4+1+5/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'k^o-N+n=i/A:-3+2+4/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'o^N-n+i=ch/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'N^n-i+ch=i/A:-2+3+3/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'n^i-ch+i=w/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'i^ch-i+w=a/A:-1+4+2/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'ch^i-w+a=sil/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'i^w-a+sil=xx/A:0+5+1/B:xx-xx_xx/C:09_xx+xx/D:xx+xx_xx/E:xx_xx!xx_xx-xx/F:5_5#0_xx@1_1|1_5/G:xx_xx%xx_xx_xx/H:xx_xx/I:1-5@1+1&1-1|1+5/J:xx_xx/K:1+1-5',
'w^a-sil+xx=xx/A:xx+xx+xx/B:xx-xx_xx/C:xx_xx+xx/D:xx+xx_xx/E:5_5!0_xx-xx/F:xx_xx#xx_xx@xx_xx|xx_xx/G:xx_xx%xx_xx_xx/H:1_5/I:xx-xx@xx+xx&xx-xx|xx+xx/J:xx_xx/K:1+1-5']
```
Please check `lab_format.pdf` in [HTS-demo_NIT-ATR503-M001.tar.bz2](http://hts.sp.nitech.ac.jp/archives/2.3/HTS-demo_NIT-ATR503-M001.tar.bz2) for more details about full-context labels.
### Grapheme-to-phoneme (G2P)
```py
In [1]: import pyopenjtalk
In [2]: pyopenjtalk.g2p("こんにちは")
Out[2]: 'k o N n i ch i w a'
In [3]: pyopenjtalk.g2p("こんにちは", kana=True)
Out[3]: 'コンニチワ'
```
### Create/Apply user dictionary
1. Create a CSV file (e.g. `user.csv`) and write custom words like below:
```csv
GNU,,,1,名詞,一般,*,*,*,*,GNU,グヌー,グヌー,2/3,*
```
2. Call `mecab_dict_index` to compile the CSV file.
```python
In [1]: import pyopenjtalk
In [2]: pyopenjtalk.mecab_dict_index("user.csv", "user.dic")
reading user.csv ... 1
emitting double-array: 100% |###########################################|
done!
```
3. Call `update_global_jtalk_with_user_dict` to apply the user dictionary.
```python
In [3]: pyopenjtalk.g2p("GNU")
Out[3]: 'j i i e n u y u u'
In [4]: pyopenjtalk.update_global_jtalk_with_user_dict("user.dic")
In [5]: pyopenjtalk.g2p("GNU")
Out[5]: 'g u n u u'
```
### About `run_marine` option
After v0.3.0, the `run_marine` option has been available for estimating the Japanese accent with the DNN-based method (see [marine](https://github.com/6gsn/marine)). If you want to use the feature, please install pyopenjtalk as below;
```shell
pip install pyopenjtalk[marine]
```
And then, you can use the option as the following examples;
```python
In [1]: import pyopenjtalk
In [2]: x, sr = pyopenjtalk.tts("おめでとうございます", run_marine=True) # for TTS
In [3]: label = pyopenjtalk.extract_fullcontext("こんにちは", run_marine=True) # for text processing frontend only
```
## LICENSE
- pyopenjtalk: MIT license ([LICENSE.md](LICENSE.md))
- Open JTalk: Modified BSD license ([COPYING](https://github.com/r9y9/open_jtalk/blob/1.10/src/COPYING))
- htsvoice in this repository: Please check [pyopenjtalk/htsvoice/README.md](pyopenjtalk/htsvoice/README.md).
- marine: Apache 2.0 license ([LICENSE](https://github.com/6gsn/marine/blob/main/LICENSE))
## Acknowledgements
HTS Working Group for their dedicated efforts to develop and maintain Open JTalk.
|
[
"Phonology",
"Speech & Audio in NLP"
] |
[] |
true |
https://github.com/hkiyomaru/japanese-word-aggregation
|
2018-08-08T04:31:10Z
|
Aggregating Japanese words based on Juman++ and ConceptNet5.5
|
hkiyomaru / japanese-word-aggregation
Public
Branches
Tags
Go to file
Go to file
Code
data
src
.gitignore
LICEN…
READ…
A simple command-line tool to aggregate Japanese
words. The aggregation is based on Juman++[Morita+
15] and ConceptNet5.5[Speer+ 17].
Python 3.6.0
Juman++ (see https://github.com/ku-nlp/jumanpp)
pyknp (see http://nlp.ist.i.kyoto-u.ac.jp/index.php?
PyKNP)
progressbar
zenhan
and their dependencies
First, clone this repository.
About
Aggregating Japanese words based
on Juman++ and ConceptNet5.5
Readme
MIT license
Activity
2 stars
2 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Japanese Word
Aggregation
Development Environment
Getting Started
README
MIT license
Then, enter the repository.
You can see a sample input file which includes one word
per one line.
Run the script for aggregation.
You'll get the result as a tab-separated file which
includes original words and the IDs to aggregate.
$ git clone
https://github.com/kiyomaro927/japanese-
word-aggregation.git
$ cd japanese-word-aggregation/
$ cat data/input.txt
こんにちは
こんにちわ
こんにちは。
こんにちは!
こんにちは!
いぬ
犬
イヌ
鰤大根
ぶり大根
父
お父さん
父親
1年生
1年生
一年生
12月
12月
十二月
1万円
1万円
10000円
10000円
$ python src/aggregate.py data/input.txt data/output.txt
$ cat data/output.txt
こんにちは 0
こんにちわ 0
こんにちは。 0
こんにちは! 0
こんにちは! 0
いぬ 1
犬 1
イヌ 1
鰤大根 2
ぶり大根 2
父 3
お父さん 3
父親 3
1年生 4
1年生 4
一年生 4
12月 5
12月 5
十二月 5
1万円 6
1万円 6
10000円 6
10000円 6
License
MIT
Reference
Hajime Morita, Daisuke Kawahara, Sadao Kurohashi, "Morphological Analysis for Unsegmented Languages using Recurrent Neural Network Language Model", EMNLP, 2015.
Robert Speer, Joshua Chin, Catherine Havasi, "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge", AAAI, 2017.
|
# Japanese Word Aggregation
A simple command-line tool to aggregate Japanese words.
The aggregation is based on Juman++[Morita+ 15] and ConceptNet5.5[Speer+ 17].
## Development Environment
- Python 3.6.0
- Juman++ (see https://github.com/ku-nlp/jumanpp)
- pyknp (see http://nlp.ist.i.kyoto-u.ac.jp/index.php?PyKNP)
- progressbar
- zenhan
- and their dependencies
## Getting Started
First, clone this repository.
```
$ git clone https://github.com/kiyomaro927/japanese-word-aggregation.git
```
Then, enter the repository.
```
$ cd japanese-word-aggregation/
```
You can see a sample input file which includes one word per one line.
```
$ cat data/input.txt
こんにちは
こんにちわ
こんにちは。
こんにちは!
こんにちは!
いぬ
犬
イヌ
鰤大根
ぶり大根
父
お父さん
父親
1年生
1年生
一年生
12月
12月
十二月
1万円
1万円
10000円
10000円
```
Run the script for aggregation.
```
$ python src/aggregate.py data/input.txt data/output.txt
```
You'll get the result as a tab-separated file which includes original words and the IDs to aggregate.
```
$ cat data/output.txt
こんにちは 0
こんにちわ 0
こんにちは。 0
こんにちは! 0
こんにちは! 0
いぬ 1
犬 1
イヌ 1
鰤大根 2
ぶり大根 2
父 3
お父さん 3
父親 3
1年生 4
1年生 4
一年生 4
12月 5
12月 5
十二月 5
1万円 6
1万円 6
10000円 6
10000円 6
```
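The output above is plain tab-separated text, so downstream grouping is straightforward. A minimal sketch (not part of this repository), assuming the two-column word/ID format shown above:
```python
from collections import defaultdict

# Group the aggregated words by their cluster ID.
groups = defaultdict(list)
with open("data/output.txt", encoding="utf-8") as f:
    for line in f:
        word, group_id = line.rstrip("\n").split("\t")
        groups[group_id].append(word)

print(groups["1"])  # e.g. ['いぬ', '犬', 'イヌ']
```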
## License
- MIT
## Reference
- Hajime Morita, Daisuke Kawahara, Sadao Kurohashi, "Morphological Analysis for Unsegmented Languages using Recurrent Neural Network Language Model", EMNLP, 2015.
- Robert Speer, Joshua Chin, Catherine Havasi, "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge", AAAI, 2017.
|
[
"Information Extraction & Text Mining",
"Relation Extraction"
] |
[] |
true |
https://github.com/ku-nlp/pyknp
|
2018-09-10T17:43:55Z
|
A Python Module for JUMAN++/KNP
|
ku-nlp / pyknp
Public
Branches
Tags
Go to file
Go to file
Code
docs
pyknp
tests
.gitignore
.readth…
AUTH…
CITATI…
COPYI…
Makefile
READ…
poetry.l…
pyproj…
形態素解析器JUMAN++(JUMAN)と構文解析器KNPの
Pythonバインディング (Python2系と3系の両方に対
応)。
Python
About
A Python Module for
JUMAN++/KNP
# nlp # knp # nlp-parsing # juman # jumanpp
Readme
View license
Activity
Custom properties
89 stars
11 watching
23 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
18
+ 4 contributors
Languages
Python 99.8%
Makefile 0.2%
Code
Issues
11
Pull requests
2
Actions
Projects
Security
Insights
pyknp: Python Module for
JUMAN++/KNP
Requirements
README
License
Verified Versions: 2.7.15, 3.7.11
形態素解析器 JUMAN++ [EN] (or JUMAN[EN])
JUMAN++ はJUMANの後継にあたる形態素解
析器
構文解析器 KNP [EN]
https://pyknp.readthedocs.io/en/latest/
京都大学 黒橋・河原研究室 ([email protected]
u.ac.jp)
John Richardson, Tomohide Shibata, Yuta
Hayashibe, Tomohiro Sakaguchi
Installation
$ pip install pyknp
Documents
Authors/Contact
|
# pyknp: Python Module for JUMAN++/KNP
形態素解析器JUMAN++(JUMAN)と構文解析器KNPのPythonバインディング (Python2系と3系の両方に対応)。
## Requirements
- Python
- Verified Versions: 2.7.15, 3.7.11
- 形態素解析器 [JUMAN++](http://nlp.ist.i.kyoto-u.ac.jp/index.php?JUMAN%2B%2B) [[EN](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN%2B%2B)]
(or [JUMAN](http://nlp.ist.i.kyoto-u.ac.jp/index.php?JUMAN)[[EN](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN)])
- JUMAN++ はJUMANの後継にあたる形態素解析器
- 構文解析器 [KNP](http://nlp.ist.i.kyoto-u.ac.jp/index.php?KNP) [[EN](http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KNP)]
## Installation
```
$ pip install pyknp
```
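A minimal usage sketch for the morphological-analysis side, assuming JUMAN++ is already installed and on the PATH (see Requirements); see the documentation below for the full API, including KNP.
```python
from pyknp import Juman

jumanpp = Juman()  # uses the jumanpp command by default
result = jumanpp.analysis("すもももももももものうち")
for mrph in result.mrph_list():
    # surface form, reading, lemma, part of speech
    print(mrph.midasi, mrph.yomi, mrph.genkei, mrph.hinsi)
```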
## Documents
https://pyknp.readthedocs.io/en/latest/
## Authors/Contact
京都大学 黒橋・河原研究室 ([email protected])
- John Richardson, Tomohide Shibata, Yuta Hayashibe, Tomohiro Sakaguchi
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/rixwew/darts-clone-python
|
2018-11-16T15:57:51Z
|
Darts-clone python binding
|
|
# darts-clone-python
[Darts-clone](https://github.com/s-yata/darts-clone) binding for Python 3.x.
This repository provides a Cython-based, pip-installable package.
## Installation
pip install dartsclone
## Usage
darts-clone-python is almost compatible with darts-clone.
```python
import dartsclone
darts = dartsclone.DoubleArray()
# build index
data = [b'apple', b'banana', b'orange']
values = [1, 3, 2]
darts.build(data, values=values)
# exact match search
result = darts.exact_match_search('apple'.encode('utf-8'))
print(result) # [1, 5]
# common prefix search
result = darts.common_prefix_search('apples'.encode('utf-8'), pair_type=False)
print(result) # [1]
# save index
darts.save('sample.dic')
# load index
darts.clear()
darts.open('sample.dic')
# dump array data
array = darts.array()
# load array data
darts.clear()
darts.set_array(array)
```
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/ku-nlp/Winograd-Schema-Challenge-Ja
|
2019-01-25T10:14:28Z
|
Japanese Translation of Winograd Schema Challenge
|
|
# Winograd-Schema-Challenge-Ja
Japanese Translation of Winograd Schema Challenge (http://www.hlt.utdallas.edu/~vince/data/emnlp12/)
## Dataset
- train: train.txt (1,322 tasks)
- test: test.txt ( 564 tasks)
### Format
- Five lines correspond to one task, which consists of the following four lines (and one blank line).
```
input sentence(s) Japanese translation (comments if any)
target pronoun Japanese translation
antecedent candidates Japanese translation
correct antecedent Japanese translation
```
- Example:
```
"James asked Robert for a favor, but he refused." ジェームズはロバートに頼みごとをした。しかし彼は断った。
he 彼
"James,Robert" ジェームズ、 ロバート
Robert ロバート
```
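Because the layout is strictly five lines per task, the files can be grouped without any extra tooling. The sketch below is only an illustration of that layout and keeps each line as-is; the repository's own reader, shown in the next section, is the reference implementation:
```python
def read_tasks(path):
    """Group train.txt / test.txt into 4-line task records (the 5th line is blank)."""
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f]
    tasks = []
    for i in range(0, len(lines), 5):
        chunk = lines[i:i + 4]
        if len(chunk) == 4:
            sentence, pronoun, candidates, answer = chunk
            tasks.append({"sentence": sentence, "pronoun": pronoun,
                          "candidates": candidates, "answer": answer})
    return tasks

print(len(read_tasks("train.txt")))  # expected: 1,322
```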
## Dataset Reader
```bash
$ python winograd_schema_challenge_ja_reader.py --train_file train.txt --test_file test.txt
```
## History
- 0.2
- Check the consistency.
- 0.1
- initial commit
## Reference
柴田知秀, 小浜翔太郎, 黒橋禎夫:
日本語Winograd Schema Challengeの構築と分析.
言語処理学会 第21回年次大会 (2015.3) (in Japanese).
http://www.anlp.jp/proceedings/annual_meeting/2015/pdf_dir/E3-1.pdf
|
[
"Commonsense Reasoning",
"Reasoning"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/yagays/embedrank
|
2019-02-01T02:41:39Z
|
Python Implementation of EmbedRank
|
|
# EmbedRank
Python implementation of "[Simple Unsupervised Keyphrase Extraction using Sentence Embeddings](https://arxiv.org/abs/1801.04470)"
## Usage
EmbedRank requires pretrained document embeddings (currently doc2vec is supported). Please see [my blog](https://yag-ays.github.io/project/pretrained_doc2vec_wikipedia/) for pretrained Japanese doc2vec models.
```py
from gensim.models.doc2vec import Doc2Vec
from embedrank import EmbedRank
from nlp_uitl import tokenize
model = Doc2Vec.load("model/jawiki.doc2vec.dbow300d.model")
embedrank = EmbedRank(model=model, tokenize=tokenize)
text = """バーレーンの首都マナマ(マナーマとも)で現在開催されている
ユネスコ(国際連合教育科学文化機関)の第42回世界遺産委員会は日本の推薦していた
「長崎と天草地方の潜伏キリシタン関連遺産」 (長崎県、熊本県)を30日、
世界遺産に登録することを決定した。文化庁が同日発表した。
日本国内の文化財の世界遺産登録は昨年に登録された福岡県の
「『神宿る島』宗像・沖ノ島と関連遺産群」に次いで18件目。
2013年の「富士山-信仰の対象と芸術の源泉」の文化遺産登録から6年連続となった。"""
```
```py
In []: embedrank.extract_keyword(text)
[('世界遺産登録', 0.61837685), ('(長崎県', 0.517046), ('ユネスコ(国際連合教育科学文化機関)', 0.5726031), ('潜伏キリシタン関連遺産', 0.544827), ('首都マナマ(マナーマ', 0.4898381)]
```
(Source: [潜伏キリシタン関連遺産、世界遺産登録 \- ウィキニュース](https://ja.wikinews.org/wiki/%E6%BD%9C%E4%BC%8F%E3%82%AD%E3%83%AA%E3%82%B7%E3%82%BF%E3%83%B3%E9%96%A2%E9%80%A3%E9%81%BA%E7%94%A3%E3%80%81%E4%B8%96%E7%95%8C%E9%81%BA%E7%94%A3%E7%99%BB%E9%8C%B2))
## Docker
Place the extracted doc2vec model in the `model/` directory and run the following commands.
```sh
$ docker build -t embedrank .
$ docker run --rm -p 8080:8080 --memory 7g -it embedrank
```
```sh
$ curl -XPOST "localhost:8080/embedrank" --data-urlencode text='バーレーンの首都マナマ(マナーマとも)で現在開催されている
ユネスコ(国際連合教育科学文化機関)の第42回世界遺産委員会は日本の推薦していた
「長崎と天草地方の潜伏キリシタン関連遺産」 (長崎県、熊本県)を30日、
世界遺産に登録することを決定した。文化庁が同日発表した。
日本国内の文化財の世界遺産登録は昨年に登録された福岡県の
「『神宿る島』宗像・沖ノ島と関連遺産群」に次いで18件目。
2013年の「富士山-信仰の対象と芸術の源泉」の文化遺産登録から6年連続となった。'
-d 'num_keywords=3'
{
"keywords": [
{
"keyword": "世界遺産登録",
"score": "0.58336747"
},
{
"keyword": "天草地方",
"score": "0.52296615"
},
{
"keyword": "首都マナマ(マナーマ",
"score": "0.5126816"
}
]
}
```
Caution:
- You need to allocate more than 7 GB of total memory.
- The container image is very large (7.38 GB).
|
[
"Information Extraction & Text Mining",
"Low-Resource NLP",
"Representation Learning",
"Responsible & Trustworthy NLP",
"Semantic Similarity",
"Semantic Text Processing",
"Term Extraction"
] |
[] |
true |
https://github.com/aozorahack/aozorabunko_text
|
2019-02-10T18:04:27Z
|
text-only archives of www.aozora.gr.jp
|
|
# aozorabunko_text
This is a collection of only the Aozora Bunko format text files found on the Aozora Bunko server ( https://www.aozora.gr.jp ), kept as plain text.
You can read all of the texts without unpacking individual zip files with the zip command.
## Download
[Download ZIP](https://github.com/aozorahack/aozorabunko_text/archive/master.zip) (over 200 MB)
If you want the data as a zip file, you can download it from the link above.
Note that all text files are inside the cards directory; everything else can be ignored.
As for getting the git repository, this is an ordinary GitHub repo, so you can simply fetch it with git pull, but if you only want the latest version without history, the following command should be faster.
```console
$ git clone --depth 1 https://github.com/aozorahack/aozorabunko_text.git
```
### Accessing individual text files
Individual files can be accessed with URLs such as `https://aozorahack.org/aozorabunko_text/cards/000081/files/45630_txt_23610/45630_txt_23610.txt` (note, however, that the Content-Type in the HTTP response header is `text/plain; charset=utf-8` while the files themselves are Shift_JIS, so they will look garbled in a browser).
Note: the text files inside the zips are named after the work titles, but in this repository they are renamed to match the zip file names.
For example, the text inside `cards/000005/files/53194_ruby_44732.zip` gets the path `cards/000005/files/53194_ruby_44732/53194_ruby_44732.txt`.
The different file names are used because the original text file names are not listed anywhere such as the per-author works CSV, and checking them would require looking inside each zip file.
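As a concrete illustration of the encoding caveat above, the sketch below fetches the example file with the Python standard library and decodes it as Shift_JIS instead of trusting the UTF-8 Content-Type header (the URL is the one from the example):
```python
import urllib.request

# Fetch one text file and decode it as Shift_JIS, since the files are
# Shift_JIS regardless of the Content-Type header.
url = ("https://aozorahack.org/aozorabunko_text/"
       "cards/000081/files/45630_txt_23610/45630_txt_23610.txt")
with urllib.request.urlopen(url) as resp:
    text = resp.read().decode("shift_jis", errors="replace")
print(text[:200])
```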
## How it works
The contents of https://github.com/aozorabunko/aozorabunko are fetched, the txt files are extracted from the zip files in the cards directory, and they are saved into directories with the same hierarchy.
This runs on CircleCI as a batch job once a day.
## Rights
Please use the files in the cards directory in accordance with the [Aozora Bunko guidelines for handling included files](https://www.aozora.gr.jp/guide/kijyunn.html).
Note that the collection also includes files whose copyright term has not yet expired and which are redistributed under licenses such as Creative Commons.
|
[] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/davidluzgouveia/kanji-data
|
2019-02-28T05:53:36Z
|
A JSON kanji dataset with updated JLPT levels and WaniKani information
|
|
# Kanji Data
This repository contains a [JSON file](kanji.json) combining all of the kanji data that I found relevant to my studies of the Japanese language. There are also two smaller variants containing only the [kyouiku](kanji-kyouiku.json) and [jouyou](kanji-jouyou.json) subsets.
Most of the data is the same as the KANJIDIC dataset, but converted to JSON for ease of use, stripped of information I didn't need, and extended with updated JLPT levels and WaniKani content.
All of the data was extracted and processed using only scripts, which should decrease the chances of human error - unless there is some bug in the code, in which case it will be easy to fix and regenerate the data.
The Python scripts used to extract and organize all of the data are also provided. Even if my choice of fields does not match your requirements, the scripts might still be useful to extract what you need.
> Note: Some of the meanings and readings that were extracted from WaniKani have a `^` or a `!` prefix. I added these to denote when an item is *not a primary answer* (`^`) or *not an accepted answer* (`!`) on WaniKani. These characters don't appear at the start of any other string in the dataset, so if you prefer to remove them, you can do a simple search and replace from `"^` and `"!` to `"`.
## Example
Here's what a single entry in the file looks like:
```json
"勝": {
"strokes": 12,
"grade": 3,
"freq": 185,
"jlpt_old": 2,
"jlpt_new": 3,
"meanings": ["Victory","Win","Prevail","Excel"],
"readings_on": ["しょう"],
"readings_kun": ["か.つ","-が.ち","まさ.る","すぐ.れる","かつ"],
"wk_level": 9,
"wk_meanings": ["Win"],
"wk_readings_on": ["しょう"],
"wk_readings_kun": ["!か"],
"wk_radicals": ["Moon","Gladiator","Power"]
}
```
Many of these fields can be `null`, so be wary of that. For instance, there are entries that don't exist in WaniKani, or that are not part of the JLPT sets, so those fields will be `null`.
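As a small illustration of working with these fields, the sketch below loads `kanji.json`, strips the `^`/`!` markers described above, and collects the entries tagged with new-JLPT level 3 while tolerating `null` fields (the file name and field names are taken from the example entry):
```python
import json

with open("kanji.json", encoding="utf-8") as f:
    kanji = json.load(f)

def clean(values):
    # Strip the ^ (non-primary) and ! (non-accepted) WaniKani markers.
    return [v.lstrip("^!") for v in (values or [])]

# Collect kanji at new-JLPT level 3; fields may be null for some entries.
n3 = {char: clean(entry.get("wk_meanings"))
      for char, entry in kanji.items()
      if entry.get("jlpt_new") == 3}
print(len(n3), n3.get("勝"))  # e.g. ['Win'] for 勝
```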
## References
All of the data comes from the following sources:
- Kanji: [KANJIDIC](http://www.edrdg.org/wiki/index.php/KANJIDIC_Project)
- JLPT: [Jonathan Waller's JLPT Resources page](http://www.tanos.co.uk/jlpt/)
- WaniKani: [WaniKani API](https://docs.api.wanikani.com/)
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/Machine-Learning-Tokyo/EN-JP-ML-Lexicon
|
2019-05-27T07:29:35Z
|
This is an English-Japanese lexicon for Machine Learning and Deep Learning terminology.
|
|
# Machine Learning and Deep Learning: EN-JP Lexicon
This is an English-Japanese lexicon for Machine Learning and Deep Learning terminology, based on the translation work for the [Machine Learning](https://github.com/afshinea/stanford-cs-229-machine-learning) and [Deep Learning cheatsheets](https://github.com/afshinea/stanford-cs-230-deep-learning) created by [@afshinea](https://github.com/afshinea) for Stanford's CS 229 Machine Learning and CS 230 Deep Learning. We have included the Japanese cheat sheet translations that were created and reviewed by a team of MLT members for each topic.
## Translation, review work and lexicon creation done by:
[Yoshiyuki Nakai](https://github.com/yoshiyukinakai/), [Yuta Kanzawa](https://ytknzw.github.io/), Hideaki Hamano, Tran Tuan Anh, [Takatoshi Nao](https://github.com/nao0811ta), Kamuela Lau, Rob Altena, Wataru Oniki and [Suzana Ilic](https://www.linkedin.com/in/suzanailic/).
# Deep Learning
## DL tips and tricks
- [日本語のチートシート](https://github.com/shervinea/cheatsheet-translation/blob/master/ja/cs-230-deep-learning-tips-and-tricks.md)
| English | 日本語 |
|:--- |:--------------------------- |
| Adaptive learning rates | 適応学習率 |
| Analytical gradient | 解析的勾配 |
| Architecture | アーキテクチャ |
| Backpropagation | 誤差逆伝播法 |
| Batch normalization | バッチ正規化 |
| Binary classification | 二項分類 |
| Calculation | 計算 |
| Chain rule | 連鎖律 |
| Coefficients | 係数 |
| Color shift | カラーシフト |
| Contrast change | コントラスト(鮮やかさ)の修正 |
| Convolution layer | 畳み込み層 |
| Cross-entropy loss | 交差エントロピー誤差 |
| Dampens oscillations | 振動を抑制する |
| Data augmentation | データ拡張 |
| Data processing | データ処理 |
| Deep learning | 深層学習 |
| Derivative | 微分 |
| Dropout | Dropout (ドロップアウト) |
| Early stopping | Early stopping (学習の早々な終了) |
| Epoch | エポック |
| Error | 損失 |
| Evaluation | 評価 |
| Finding optimal weights | 最適な重みの探索 |
| Flip | 反転 |
| Forward propagation | 順伝播 |
| Fully connected layer | 全結合層 |
| Gradient checking | 勾配チェック |
| Gradient descent | 勾配降下法 |
| Gradient of the loss | 損失の勾配 |
| Hyperparameter | ハイパーパラメータ |
| Improvement to SGD | SGDの改良 |
| Information loss | 情報損失 |
| Learning algorithm | 学習アルゴリズム |
| Learning rate | 学習率 |
| Loss function | 損失関数 |
| Mini-batch | ミニバッチ |
| Momentum | Momentum(運動量)|
| Neural network training | ニューラルネットワークの学習 |
| Noise addition | ノイズの付加 |
| Non-linear layer | 非線形層 |
| Numerical gradient | 数値的勾配 |
| Optimizing convergence | 収束の最適化 |
| Output | 出力 |
| Overfitting | 過学習 |
| Parameter tuning | パラメータチューニング |
| Parametrize | パラメータ化する |
| Pre-trained weights | 学習済みの重み |
| Prevent overfitting | 過学習を避けるために |
| Random crop | ランダムな切り抜き |
| Regularization | 正規化 |
| Root Mean Square propagation | 二乗平均平方根のプロパゲーション |
| Rotation | 回転 |
| Transfer learning | 転移学習 |
| Type | 種類 |
| Updating weights | 重み更新 |
| Validation loss | バリデーションの損失 |
| Weight regularization | 重みの正規化 |
| Weights initialization | 重みの初期化 |
| Xavier initialization | Xavier初期化 |
## Convolutional Neural Nets
| English | 日本語 |
|:-------------------|:-----------------------|
| Activation | 活性化 |
| Activation functions | 活性化関数 |
| Activation map | 活性化マップ |
| Anchor box | アンカーボックス |
| Architecture | アーキテクチャ |
| Average pooling | 平均プーリング |
| Bias | バイアス |
| Bounding box | バウンディングボックス |
| Computational trick architectures | 計算トリックアーキテクチャ |
| Convolution | 畳み込み |
| Convolution layer | 畳み込み層 |
| Convolutional Neural Networks | 畳み込みニューラルネットワーク |
| Deep Learning | 深層学習 |
| Detection | 検出 |
| Dimensions | 次元 |
| Discriminative model | 識別モデル |
| Face verification/recognition | 顔認証/認識 |
| Feature map | 特徴マップ |
| Filter hyperparameters | フィルタハイパーパラメタ |
| Fine tuning | ファインチューニング |
| Flatten | 平滑化 |
| Fully connected | 全結合 |
| Generative Adversarial Net | 敵対的生成ネットワーク |
| Generative model | 生成モデル |
| Gram matrix | グラム行列 |
| Image classification | 画像分類 |
| Inception Network | インセプションネットワーク |
| Intersection over Union | 和集合における共通部分の割合 (IoU) |
| Layer | 層 |
| Localization | 位置特定 |
| Max pooling | 最大プーリング |
| Model complexity | モデルの複雑さ|
| Neural style transfer | ニューラルスタイル変換 |
| Noise | ノイズ |
| Non-linearity | 非線形性 |
| Non-max suppression | 非極大抑制 |
| Object detection | オブジェクト検出 |
| Object recognition | 物体認識 |
| One Shot Learning | One Shot学習 |
| Padding | パディング |
| Parameter compatibility | パラメータの互換性 |
| Pooling | プーリング |
| R-CNN | R-CNN |
| Receptive field | 受容野 |
| Rectified Linear Unit | 正規化線形ユニット (ReLU) |
| Residual Network (ResNet) | 残差ネットワーク (ResNet) |
| Segmentation | セグメンテーション |
| Siamese Network | シャムネットワーク |
| Softmax | ソフトマックス |
| Stride | ストライド |
| Style matrix | スタイル行列 |
| Style/content cost function | スタイル/コンテンツコスト関数 |
| Training set | 学習セット |
| Triplet loss | トリプレット損失 |
| Tuning hyperparameters | ハイパーパラメータの調整 |
| You Only Look Once (YOLO) | YOLO |
## Recurrent Neural Nets
| English | 日本語 |
|:-------------------|:-----------------------|
| 1-hot representation | 1-hot 表現 |
| A Conditional language model | 条件付き言語モデル |
| A language model | 言語モデル |
| Amount of attention | 注意量 |
| Attention model | アテンションモデル |
| Beam search | ビームサーチ |
| Bidirectional RNN | 双方向 RNN |
| Binary classifiers | バイナリ分類器 |
| Bleu score | ブルースコア(機械翻訳比較スコア) |
| Brevity penalty | 簡潔さへのペナルティ |
| CBOW | CBOW |
| Co-occurence matrix | 共起行列 |
| Conditional probabilities | 条件付き確率 |
| Cosine similarity | コサイン類似度 |
| Deep RNN | ディープ RNN |
| Embedding Matrix | 埋め込み行列 |
| Exploding gradient | 勾配爆発 |
| Forget gate | 忘却ゲート |
| GloVe | グローブ |
| Gradient clipping | 勾配クリッピング |
| GRU | ゲート付き回帰型ユニット |
| Length normalization | 言語長正規化 |
| Length normalization | 文章の長さの正規化 |
| Likelihood | 可能性 |
| Long term/ dependencies | 長期依存性関係 |
| LSTM | 長・短期記憶 |
| Machine translation | 機械翻訳 |
| Motivation and notations | 動機と表記 |
| Multiplicative gradient | 掛け算の勾配 |
| N-gram | n-gram |
| Naive greedy search | 単純な貪欲法 |
| Negative sampling | ネガティブサンプリング |
| Notations | ノーテーション |
| Output gate | 出力ゲート |
| Perplexity | パープレキシティ |
| Relevance gate | 関連ゲート |
| Skip-gram | スキップグラム |
| Skip-gram | スキップグラム |
| Softener | 緩衝パラメータ |
| t-SNE | t-SNE |
| Target/context likelihood model | ターゲット/コンテキスト尤度モデル |
| Update gate | 更新ゲート |
| Vanishing gradient | 勾配喪失 |
| Weighting function | 重み関数 |
| Word Embedding | 単語埋め込み |
| Word2vec | Word2vec |
# Machine Learning
## Supervised Learning
| English | 日本語 |
|:--- |:--------------------------- |
| Adaptive boosting | 適応的ブースティング |
| Batch gradient descent | バッチ勾配降下法 |
| Bayes' rule | ベイズの定理 |
| Bernoulli | ベルヌーイ |
| Bernoulli distribution | ベルヌーイ分布 |
| Bias | バイアス |
| Binary trees | 二分木 |
| Boosting | ブースティング |
| Boosting step | ブースティングステップ |
| Canonical parameter | 正準パラメータ |
| Categorical variable | カテゴリ変数 |
| Chernoff bound | チェルノフ上界 |
| Class | クラス |
| Classification | 分類 |
| Classification and Regression Trees (CART) | 分類・回帰ツリー (CART) |
| Classifier | 分類器 |
| Closed form solution | 閉形式の解 |
| Coefficients | 係数 |
| Confusion matrix | 混同行列 |
| Continuous values | 連続値 |
| Cost function | コスト関数 |
| Cross-entropy | クロスエントロピー |
| Cross validation | 交差検証 / クロスバリデーション|
| Decision boundary | 決定境界 |
| Decision trees | 決定ツリー |
| Discriminative model | 判別モデル |
| Distribution | 分布 |
| Empirical error | 経験誤差 |
| Ensemble methods | アンサンブル学習 |
| Error rate | 誤答率 |
| Estimation | 推定 |
| Exponential distributions | 一般的な指数分布族 |
| Exponential family | 指数分布族 ― 正準パラメータ |
| Feature engineering | 特徴量エンジニアリング |
| Feature mapping | 特徴写像 |
| Features | 特徴 |
| Framework | フレームワーク |
| Function | 関数 |
| Gaussian | ガウス |
| Gaussian Discriminant Analysis | ガウシアン判別分析 |
| Gaussian kernel | ガウシアンカーネル |
| Generalized Linear Models | 一般化線形モデル |
| Generative Learning | 生成学習 |
| Generative model | 生成モデル |
| Geometric | 幾何 |
| Good performance | 的に良い性能 |
| Gradient boosting | 勾配ブースティング |
| Gradient descent | 勾配降下法 |
| Highly uninterpretable | 解釈しにくい |
| Hinge loss | ヒンジ損失 |
| Hoeffding inequality | ヘフディング不等式 |
| Hold out | ホールドアウト |
| Hypothesis | 仮説 |
| Independent | 独立 |
| Input | 入力 |
| Interpretable | 解釈しやすい |
| k-nearest neighbors (k-NN) | k近傍法 (k-NN) |
| Kernel | カーネル |
| Kernel mapping | カーネル写像 |
| Kernel trick | カーネルトリック |
| Lagrange multipliers | ラグランジュ乗数 |
| Lagrangian | ラグランジアン |
| Learning Theory | 学習理論 |
| Least Mean Squares | 最小2乗法 |
| Least squared error | 最小2乗誤差 |
| Likelihood | 尤度 |
| Linear classifier | 線形分類器 |
| Linear discriminant analysis | 線形判別分析(LDA) |
| Linear models | 線形モデル |
| Linear regression | 線形回帰 |
| Link function | リンク関数 |
| Locally Weighted Regression | 局所重み付き回帰 |
| Log-likelihood | 対数尤度 |
| Logistic loss | ロジスティック損失 |
| Logistic regression | ロジスティック回帰 |
| Loss function | 損失関数 |
| Matrix | 行列 |
| Maximizing the likelihood | 尤度を最大にする |
| Minimum distance | 最短距離 |
| Misclassification | 誤分類 |
| Missing value | 欠損値 |
| Multi-class logistic regression | 多クラス分類ロジスティック回帰 |
| Multi-label classification | 多ラベル分類 / マルチラベル分類 |
| Multidimensional generalization | 高次元正則化 |
| Naive Bayes | ナイーブベイズ |
| Natural parameter | 自然パラメータ |
| Non-linear separability | 非線形分離問題 |
| Non-parametric approaches | ノン・パラメトリックな手法 |
| Normal equations | 正規方程式 |
| Normalization parameter | 正規化定数 |
| Numerical variable | 数値変数 |
| Optimal margin classifier | 最適マージン分類器 |
| Optimal parameters | 最適なパラメータ |
| Optimization | 最適化 |
| Optimization problem | 最適化問題 |
| Ordinary least squares | 最小2乗回帰 |
| Output | 出力 |
| Parameter | パラメータ |
| Parameter update | パラメータ更新 |
| Poisson | ポワソン |
| Prediction | 予測 |
| Probability | 確率 |
| Probability distributions of the data | データの確率分布 |
| Probably Approximately Correct (PAC) | 確率的に近似的に正しい (PAC) |
| Random forest | ランダムフォレスト |
| Random variable | ランダムな変数 |
| Randomly selected features | ランダムに選択された特徴量 |
| Recommendation | レコメンデーション |
| Regression | 回帰 |
| Sample mean | 標本平均 |
| Shattering | 細分化 |
| Sigmoid function | シグモイド関数 |
| Softmax regression | ソフトマックス回帰 |
| Spam detection | スパム検知 |
| Stochastic gradient descent | 確率的勾配降下法 |
| Supervised Learning | 教師あり学習 |
| Support Vector Machine (SVM) | サポートベクターマシン |
| Text classification | テキスト分類 |
| To maximize | 最大化する |
| To minimize | 最小化する |
| To predict | 予測する |
| Training data | 学習データ |
| Training error | 学習誤差 |
| Tree-based methods | ツリーベース学習 |
| Union bound | 和集合上界 |
| Update rule | 更新ルール |
| Upper bound theorem | 上界定理 |
| Vapnik-Chervonenkis (VC) dimension | ヴァプニク・チェルヴォーネンキス次元 (VC) |
| Variables | 変数 |
| Variance | 分散 |
| Weights | 重み |
## Unsupervised Learning
| English | 日本語 |
|:-------------------|:-----------------------|
| Agglomerative hierarchical | 凝集階層 |
| Average linkage | 平均リンケージ |
| Bell and Sejnowski ICA algorithm | ベルとシノスキーのICAアルゴリズム |
| Calinski-Harabaz index | Calinski-Harabazインデックス |
| Centroids | 重心 |
| Clustering | クラスタリング |
| Clustering assessment metrics | クラスタリング評価指標 |
| Complete linkage | 完全リンケージ |
| Convergence | 収束 |
| Diagonal | Diagonal |
| Dimension reduction | 次元削減 |
| Dispersion matrices | 分散行列 |
| Distortion function | ひずみ関数 |
| E-step | E-ステップ |
| Eigenvalue | 固有値 |
| Eigenvector | 固有ベクトル |
| Expectation-Maximization | 期待値最大化法 |
| Factor analysis | 因子分析 |
| Gaussians initialization | ガウス分布初期化 |
| Hierarchical clustering | 階層的クラスタリング |
| Independent component analysis (ICA) | 独立成分分析 |
| Jensen's inequality | イェンセンの不等式 |
| K-means clustering | K平均法 |
| Latent variables | 潜在変数 |
| M-step | M-ステップ |
| Means initialization | 平均の初期化 |
| Orthogonal matrix | 実直交行列 |
| Posterior probabilities | 事後確率 |
| Principal components | 主成分 |
| Principal component analysis (PCA) | 主成分分析 |
| Random variables | ランダムな変数 |
| Silhouette coefficient | シルエット係数 |
| Spectral theorem | スペクトル定理 |
| Unmixing matrix | 非混合行列 |
| Unsupervised learning | 教師なし学習 |
| Ward linkage | ウォードリンケージ |
## Probabilities and Statistics
| English | 日本語 |
|:-------------------|:-----------------------|
| Axiom | 公理 |
| Bayes' rule | ベイズの定理 |
| Boundary | 境界 |
| Characteristic function | 特性関数 |
| Chebyshev's inequality | チェビシェフの不等式 |
| Chi-square statistic | カイ二乗統計量 |
| Combinatorics | 組合せ |
| Conditional Probability | 条件付き確率 |
| Continuous | 連続 |
| Cumulative distribution function (CDF) | 累積分布関数 |
| Cumulative function | 累積関数 |
| Discrete | 離散 |
| Distribution | 分布 |
| Event | 事象 |
| Expected value | 期待値 |
| Generalized expected value | 一般化した期待値 |
| Jointly Distributed Random Variables | 同時分布の確率変数 |
| Leibniz integral rule | ライプニッツの積分則 |
| Marginal density | 周辺密度 |
| Mutual information | 相互情報量 |
| Mutually exclusive events | 互いに排反な事象 |
| Order | 順番 |
| Partition | 分割 |
| Pearson correlation coefficient | 相関係数 (ピアソンの積率相関係数) |
| Permutation | 順列 |
| Probability | 確率 |
| Probability density function (PDF) | 確率密度関数 |
| Probability distribution | 確率分布 |
| Random variable | 確率変数 |
| Result | 結果 |
| Sample space | 標本空間 |
| Sequence | 数列 |
| Spearman's rank correlation coefficient | スピアマンの順位相関係数 |
| Standard deviation | 標準偏差 |
| Standard error | 標準誤差 |
| Statistics | 統計 |
| Subset | 部分集合 |
| Type | 種類 |
| Variance | 分散 |
| Weighted mean | 加重平均 |
## Algebra and Calculus
| English | 日本語 |
|:-------------------|:-----------------------|
| Antisymmetric | 反対称 |
| Calculus | 微積分 |
| Column | 列 |
| Column-vector | 列ベクトル |
| Diagonal | 対角成 |
| Element | 要素 |
| Function | 関数 |
| Invertible | 可逆 |
| Linear Algebra | 線形代数 |
| Matrix | 行列 |
| Norm | ノルム |
| Notation | 表記法 |
| Row | 行 |
| Scalar | スカラー |
| Square matrix | 正方行列 |
| Sum | 和 |
| Symmetric | 対称 |
| Symmetric decomposition | 対称分解 |
| Trace | 跡 |
| Vector | ベクトル |
| Vector space | ベクトル空間 |
|
[
"Machine Translation",
"Multilinguality",
"Text Generation"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/yusugomori/jesc_small
|
2019-07-04T16:10:21Z
|
Small Japanese-English Subtitle Corpus
|
|
# jesc_small
Small Japanese-English Subtitle Corpus. Sentences are extracted from [JESC: Japanese-English Subtitle Corpus](https://nlp.stanford.edu/projects/jesc/index.html) and filtered to sentences of 4 to 16 words in length.
Both Japanese and English sentences are tokenized with [StanfordNLP](https://stanfordnlp.github.io/stanfordnlp/) (v0.2.0).
All texts are encoded in UTF-8. Sentence separator is `'\n'` and word separator is `' '`.
Additionally, all tokenized data can be downloaded from [here](https://drive.google.com/drive/folders/1ldHD6mAJK6Q7vGeu8zk7OXdHPFVcO361?usp=sharing).
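Given the separators above, the parallel files can be paired line by line. The sketch below assumes `train.en` and `train.ja` are in the current directory:
```python
def load_parallel(en_path="train.en", ja_path="train.ja"):
    # Pair tokenized sentences line by line; words are space-separated.
    with open(en_path, encoding="utf-8") as fe, open(ja_path, encoding="utf-8") as fj:
        for en, ja in zip(fe, fj):
            yield en.rstrip("\n").split(" "), ja.rstrip("\n").split(" ")

en_tokens, ja_tokens = next(load_parallel())
print(en_tokens)
print(ja_tokens)
```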
## Corpus statistics
| File | #sentences | #words | #vocabulary |
|:---------------|-----------:|---------:|------------:|
| train.en | 100,000 | 809,353 | 29,682 |
| train.ja | 100,000 | 808,157 | 46,471 |
| dev.en | 1,000 | 8,025 | 1,827 |
| dev.ja | 1,000 | 8,163 | 2,340 |
| test.en | 1,000 | 8,057 | 1,805 |
| test.ja | 1,000 | 8,084 | 2,306 |
<br>
This repo is inspired by [small_parallel_enja](https://github.com/odashi/small_parallel_enja).
|
[
"Machine Translation",
"Multilinguality",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/Takeuchi-Lab-LM/python_asa
|
2019-07-10T05:01:43Z
|
python版日本語意味役割付与システム(ASA)
|
|
(English description of the programme follows Japanese)
# `python_asa`
A Python version of the Japanese semantic role labeling system ASA.
## Requirements
- Python 3.5 or later
- CaboCha must be usable from Python
  - CaboCha download: https://taku910.github.io/cabocha/
## Installation
`git clone {url}`
`pip install -e python_asa`
## Usage
Move into the directory with `cd asapy`, then start it with `python main.py`; it accepts input from standard input.
## Changes
- 2020/1/12: Bug fixes. Changed the dictionary structure to JSON. Changed the detection of verb mood and similar features. Fixed location detection for semantic roles (semrole). Added DESIDERATIVE (願望).
# Argument Structure Analyzer (ASA) and ASA python
ASA extracts predicate-argument structures in Japanese sentences based on the predicate frame file [Predicate Thesaurus (PT)](http://pth.cl.cs.okayama-u.ac.jp/testp/pth/Vths), which is constructed under the Project of Constructing a Japanese Thesaurus of Predicate-Argument Structure. Thus, ASA may fail to detect some predicate-argument structures in cases where the predicates are not registered in PT. Although PT currently contains about 10,000 predicates, some bugs may remain since the thesaurus is under construction.
ASA detects a set of predicate-arguments based on chunk dependency. Since the dependency unit employed in this analyzer is not phrase-based, you need to follow the dependency links if a full phrase is necessary.
When you input a Japanese sentence, 「太郎の本を健が捨てた」*Ken threw Taro's book away*, for instance,
ASA punctuates it into several chunks, as shown below.
The square brackets `[ ]` signify the boundaries of a chunk. The chunk IDs and head chunk IDs are also assigned in the brackets.
```
[0 1 太郎の] [1 3 本を] [2 3 健が] [3 -1 捨てた]
```
>[0 1 Taro-no] [1 3 hon-wo] [2 3 Ken-ga] [3 -1 suteta]
>
>[0 1 Taro-ɢᴇɴ] [1 3 book-ᴀᴄᴄ] [2 3 Ken-ɴᴏᴍ] [3 -1 throw.away]
ASA returns the predicate-arguments of the sentence as:
```
[arg0 健が] [arg1 本を] [v 捨てた]
```
Here, the genitive chunk [太郎の] in the noun phrase 太郎の本 is ignored. In order to get the complete phrase of `arg1`, i.e.,
`[arg1 太郎の本を]`, you have to follow the chunk dependency links and extract the chunk [太郎の].
## Accuracy of detecting semantic role labels
Accuracy of semantic role labeling is about 60% evaluated on [BCCWJ-PT corpus](http://pth.cl.cs.okayama-u.ac.jp/). The accuracy is not high enough, as the system utilizes a simple, rule-based approach.
## How to get a set of predicate-arguments
> Input sentence: 太郎は6時に次郎を追いかけた。
> Output: `['追いかける', ['太郎は', '対象'], ['6時に', '場所(時)(点)'], ['次郎を', '']]`
### getpas.py
```python
from ASA import ASA

def out_predarg(result_chunks):
    for chunk in result_chunks:
        if chunk.get('semantic') != None:
            # get a predicate-argument
            out_st = []
            out_st.append(chunk['main'])
            record_arg_chunk_ids = []
            for arg_chunk in chunk['frames']:
                arg_chunk_id = arg_chunk['id']
                if arg_chunk_id in record_arg_chunk_ids:
                    continue
                arg_surface = result_chunks[arg_chunk_id]['surface']
                arg_role = arg_chunk['semrole']
                #arg_nrole = result_chunks[arg_chunk_id]['surface']
                out_st.append([arg_surface, arg_role])
                record_arg_chunk_ids.append(arg_chunk_id)
            print(out_st)

if __name__ == '__main__':
    asa = ASA()
    sentences = ["太郎は6時に次郎を追いかけた", "昨日,太郎に会った", "太郎が買った本を売った"]
    for sent in sentences:
        asa.parse(sent)  # Class result.Result
        result_json = asa.dumpJson()
        print(result_json['surface'])  # input sentence
        result_chunks = result_json['chunks']
        out_predarg(result_chunks)
```
|
[
"Semantic Parsing",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/DayuanJiang/giant_ja-en_parallel_corpus
|
2019-08-04T03:01:03Z
|
This directory includes a giant Japanese-English subtitle corpus. The raw data comes from the Stanford’s JESC project.
|
|
# Giant_ja-en_parallel_corpus: 2.8M Ja/En Subtitle Corpus
This directory includes a giant Japanese-English subtitle corpus. The raw data comes from the Stanford’s [JESC](https://nlp.stanford.edu/projects/jesc/) project.
## Data Example
```
# test.ja
顔面 パンチ かい ?
お姉ちゃん 、 何で ?
もしくは 実際 の 私 の 要求 を 満たす こと も かのう でしょ う 。
分かっ た 、 リジー 。
夫 を 自分 で 、 けがす こと に なり ます 。
あの 、 それ くらい に 、 し て おい て くれ ない ?
お 掛け 下さい 。
```
```
# test.en
so face punch , huh ?
lisa , no !
or you could actually meet my need .
me ! ok , lizzy .
my husband would defile himself .
hey , can you leave it at that ?
we can sit in here .
```
## Contents
- A large corpus consisting of 2.8 million sentences.
- Translations of casual language, colloquialisms, expository writing, and narrative discourse. These are domains that are hard to find in JA-EN MT.
## Modifications
Several pre-processing steps have been applied to make the dataset easier to use (a sketch of the English-side rules follows the lists below).
Overall:
- Pairs whose Japanese phrase has only one word are deleted.
- The data has been split into train/dev/test sets of the following sizes:
  - train: 2,795,067 phrase pairs
  - dev: 2,800 phrase pairs
  - test: 2,800 phrase pairs
For English text:
- Add ‘.’ to the end of an English phrase if it does not end with punctuation.
- Tokenize text with `nltk`.
For Japanese text:
- Add ‘。’ to the end of a Japanese phrase if it does not end with punctuation.
- Replace spaces inside the phrase with ‘、’.
- Tokenize text with the tokenizer `Mecab` and the dictionary `mecab-ipadic-neologd`.
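The sketch below illustrates the English-side rules only, using `nltk` for tokenization; it is not the repository's preprocessing script, just a rough reconstruction of the description above (NLTK's `punkt` tokenizer data must be downloaded first):
```python
import string

import nltk  # pip install nltk; then run nltk.download("punkt") once

def preprocess_en(sentence):
    # Append '.' when the sentence lacks final punctuation, then tokenize.
    sentence = sentence.strip()
    if sentence and sentence[-1] not in string.punctuation:
        sentence += "."
    return " ".join(nltk.word_tokenize(sentence))

print(preprocess_en("so face punch , huh"))  # -> "so face punch , huh ."
```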
|
[
"Machine Translation",
"Multilinguality"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/kmizu/sftly-replace
|
2019-08-20T15:11:46Z
|
A Chrome extension to replace the selected text softly
|
|
# softly-replace
A Chrome extension that rephrases the selected text in a softer tone.
For now it is only at the "it works on my machine" stage.
If you want to try it, clone the repository and load the whole directory via Chrome's "Load unpacked extension" option.
It uses OpenAI's so-called ChatGPT API.
Set your OpenAI API key on the extension's options page; the key is never stored on any server, so don't worry.
## Screenshots



|
[
"Natural Language Interfaces",
"Paraphrasing",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/chakki-works/CoARiJ
|
2019-09-02T05:12:47Z
|
Corpus of Annual Reports in Japan
|
chakki-works / CoARiJ
Public
Branches
Tags
Go to file
Go to file
Code
coarij
releases
tests
.gitignore
.travis.yml
LICENSE
README.md
requirements-test.txt
requirements.txt
setup.py
build
build unknown
unknown
codecov
codecov
83%
83%
We organized Japanese financial reports to encourage applying NLP techniques to financial analytics.
The corpora are separated to each financial years.
master version.
fiscal_year
Raw file version (F)
Text extracted version (E)
2014
.zip (9.3GB)
.zip (269.9MB)
2015
.zip (9.8GB)
.zip (291.1MB)
2016
.zip (10.2GB)
.zip (334.7MB)
2017
.zip (9.1GB)
.zip (309.4MB)
2018
.zip (10.5GB)
.zip (260.9MB)
financial data is from 決算短信情報.
We use non-cosolidated data if it exist.
stock data is from 月間相場表(内国株式).
close is fiscal period end and open is 1 year before of it.
v1.0
About
Corpus of Annual Reports in Japan
# finance # natural-language-processing
# corpus # dataset
Readme
MIT license
Activity
Custom properties
86 stars
11 watching
7 forks
Report repository
Releases
4 tags
Packages
No packages published
Languages
Python 100.0%
Code
Issues
4
Pull requests
Actions
Projects
Wiki
Security
Insights
CoARiJ: Corpus of Annual Reports in Japan
Dataset
Past release
Statistics
README
MIT license
fiscal_year
number_of_reports
has_csr_reports
has_financial_data
has_stock_data
2014
3,724
92
3,583
3,595
2015
3,870
96
3,725
3,751
2016
4,066
97
3,924
3,941
2017
3,578
89
3,441
3,472
2018
3,513
70
2,893
3,413
The structure of dataset is following.
docs includes XBRL and PDF file.
XBRL file of annual reports (files are retrieved from EDINET).
PDF file of CSR reports (additional content).
documents.csv has metadata like following. Please refer the detail at Wiki.
edinet_code: E0000X
filer_name: XXX株式会社
fiscal_year: 201X
fiscal_period: FY
doc_path: docs/S000000X.xbrl
csr_path: docs/E0000X_201X_JP_36.pdf
Text extracted version includes txt files that match each part of an annual report.
The extracted parts are defined at xbrr .
You can download dataset by command line tool.
Please refer the usage by -- (using fire).
Example command.
File structure
Raw file version ( --kind F )
chakki_esg_financial_{year}.zip
└──{year}
├── documents.csv
└── docs/
Text extracted version ( --kind E )
chakki_esg_financial_{year}_extracted.zip
└──{year}
├── documents.csv
└── docs/
Tool
pip install coarij
coarij --
# Download raw file version dataset of 2014.
coarij download --kind F --year 2014
# Extract business.overview_of_result part of TIS.Inc (sec code=3626).
coarij extract business.overview_of_result --sec_code 3626
# Tokenize text by Janome (`janome` or `sudachi` is supported).
If you want to download the latest dataset, please specify --version master when downloading the data.
For the parsable parts, please refer to xbrr .
You can use Ledger to select the files you need from the overall CoARiJ dataset.
pip install janome
coarij tokenize --tokenizer janome
# Show tokenized result (words are separated by \t).
head -n 5 data/processed/2014/docs/S100552V_business_overview_of_result_tokenized.txt
1 【 業績 等 の 概要 】
( 1 ) 業績
当 連結 会計 年度 における 我が国 経済 は 、 消費 税率 引上げ に 伴う 駆け込み行見
from coarij.storage import Storage
storage = Storage("your/data/directory")
ledger = storage.get_ledger()
collected = ledger.collect(edinet_code="E00021")
|
# CoARiJ: Corpus of Annual Reports in Japan
[](https://badge.fury.io/py/coarij)
[](https://travis-ci.org/chakki-works/coarij)
[](https://codecov.io/gh/chakki-works/coarij)
We organized Japanese financial reports to encourage applying NLP techniques to financial analytics.
## Dataset
The corpora are separated by fiscal year.
master version.
| fiscal_year | Raw file version (F) | Text extracted version (E) |
|-------------|-------------------|-----------------|
| 2014 | [.zip (9.3GB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_2014.zip) | [.zip (269.9MB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_extracted_2014.zip) |
| 2015 | [.zip (9.8GB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_2015.zip) | [.zip (291.1MB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_extracted_2015.zip) |
| 2016 | [.zip (10.2GB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_2016.zip) | [.zip (334.7MB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_extracted_2016.zip) |
| 2017 | [.zip (9.1GB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_2017.zip) | [.zip (309.4MB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_extracted_2017.zip) |
| 2018 | [.zip (10.5GB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_2018.zip) | [.zip (260.9MB)](https://s3-ap-northeast-1.amazonaws.com/chakki.esg.financial.jp/dataset/release/chakki_esg_financial_extracted_2018.zip) |
* The financial data is from [決算短信情報](http://db-ec.jpx.co.jp/category/C027/).
* We use non-consolidated data if it exists.
* The stock data is from [月間相場表(内国株式)](http://db-ec.jpx.co.jp/category/C021/STAT1002.html).
* `close` is the fiscal period end and `open` is one year before it.
### Past release
* [v1.0](https://github.com/chakki-works/CoARiJ/blob/master/releases/v1.0.md)
### Statistics
| fiscal_year | number_of_reports | has_csr_reports | has_financial_data | has_stock_data |
|-------------|-------------------|-----------------|--------------------|----------------|
| 2014 | 3,724 | 92 | 3,583 | 3,595 |
| 2015 | 3,870 | 96 | 3,725 | 3,751 |
| 2016 | 4,066 | 97 | 3,924 | 3,941 |
| 2017 | 3,578 | 89 | 3,441 | 3,472 |
| 2018 | 3,513 | 70 | 2,893 | 3,413 |
### File structure
#### Raw file version (`--kind F`)
The structure of the dataset is as follows.
```
chakki_esg_financial_{year}.zip
└──{year}
├── documents.csv
└── docs/
```
`docs` includes XBRL and PDF files.
* XBRL file of annual reports (files are retrieved from [EDINET](http://disclosure.edinet-fsa.go.jp/)).
* PDF file of CSR reports (additional content).
`documents.csv` has metadata like the following. Please refer to the details on the [Wiki](https://github.com/chakki-works/CoARiJ/wiki/Columns-on-the-file).
* edinet_code: `E0000X`
* filer_name: `XXX株式会社`
* fiscal_year: `201X`
* fiscal_period: `FY`
* doc_path: `docs/S000000X.xbrl`
* csr_path: `docs/E0000X_201X_JP_36.pdf`
#### Text extracted version (`--kind E`)
The text extracted version includes `txt` files that correspond to each part of an annual report.
The extracted parts are defined in [`xbrr`](https://github.com/chakki-works/xbrr/blob/master/docs/edinet.md).
```
chakki_esg_financial_{year}_extracted.zip
└──{year}
├── documents.csv
└── docs/
```
## Tool
You can download the dataset with the command line tool.
```
pip install coarij
```
Please refer to the usage via `--` (using [fire](https://github.com/google/python-fire)).
```
coarij --
```
Example commands:
```bash
# Download raw file version dataset of 2014.
coarij download --kind F --year 2014
# Extract business.overview_of_result part of TIS.Inc (sec code=3626).
coarij extract business.overview_of_result --sec_code 3626
# Tokenize text by Janome (`janome` or `sudachi` is supported).
pip install janome
coarij tokenize --tokenizer janome
# Show tokenized result (words are separated by \t).
head -n 5 data/processed/2014/docs/S100552V_business_overview_of_result_tokenized.txt
1 【 業績 等 の 概要 】
( 1 ) 業績
当 連結 会計 年度 における 我が国 経済 は 、 消費 税率 引上げ に 伴う 駆け込み 需要 の 反動 や 海外 景気 動向 に対する 先行き 懸念 等 から 弱い 動き も 見 られ まし た が 、 企業 収益 の 改善 等 により 全体 ...
```
If you want to download the latest dataset, please specify `--version master` when downloading the data.
* For the parsable parts, please refer to [`xbrr`](https://github.com/chakki-works/xbrr/blob/master/docs/edinet.md).
You can use `Ledger` to select the files you need from the overall CoARiJ dataset.
```python
from coarij.storage import Storage
storage = Storage("your/data/directory")
ledger = storage.get_ledger()
collected = ledger.collect(edinet_code="E00021")
```
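Once a release archive is extracted, `documents.csv` can also be inspected directly. Below is a minimal sketch with pandas, assuming only the column names listed above (`edinet_code`, `filer_name`, `fiscal_year`, `doc_path`, `csr_path`); the local path `data/interim/2014/documents.csv` is illustrative and not fixed by the tool.
```python
import pandas as pd

# Path is illustrative; point it at the documents.csv inside the extracted archive.
docs = pd.read_csv("data/interim/2014/documents.csv")

# Keep the reports of one filer, identified by its EDINET code.
target = docs[docs["edinet_code"] == "E00021"]

# Show where the XBRL documents and optional CSR reports live on disk.
print(target[["filer_name", "fiscal_year", "doc_path", "csr_path"]])
```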
|
[] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/cl-tohoku/elmo-japanese
|
2019-10-01T03:16:13Z
|
elmo-japanese
|
cl-tohoku / elmo-japanese
Public
Branches
Tags
Go to file
Go to file
Code
data
scripts
src
READ…
Tensorflow implementation of bidirectional language
models (biLM) used to compute ELMo representations
from "Deep contextualized word representations".
This codebase is based on bilm-tf and deals with
Japanese.
This repository supports both training biLMs and using
pre-trained models for prediction.
CPU
GPU
About
No description, website, or topics
provided.
Readme
Activity
Custom properties
5 stars
6 watching
1 fork
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 99.8%
Shell 0.2%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
elmo-japanese
Installation
conda create -n elmo-jp python=3.6
anaconda
source activate elmo-jp
pip install tensorflow==1.10 h5py
git clone https://github.com/cl-
tohoku/elmo-japanese.git
README
Training ELMo
Computing representations from the trained biLM
The following command outputs the ELMo
representations (elmo.hdf5) for the text
(sample.jp.wakati.txt) in the checkpoint directory
(save_dir).
The following command prints out the information of the
elmo.hdf5, such as the number of sentences, words and
dimensions.
conda create -n elmo-jp python=3.6
anaconda
source activate elmo-jp
pip install tensorflow-gpu==1.10 h5py
git clone https://github.com/cl-
tohoku/elmo-japanese.git
Getting started
python src/run_train.py \
--option_file data/config.json \
--save_dir checkpoint \
--word_file
data/vocab.sample.jp.wakati.txt \
--char_file
data/vocab.sample.jp.space.txt \
--train_prefix
data/sample.jp.wakati.txt
python src/run_elmo.py \
--option_file
checkpoint/options.json \
--weight_file checkpoint/weight.hdf5
\
--word_file
data/vocab.sample.jp.wakati.txt \
--char_file
data/vocab.sample.jp.space.txt \
--data_file
data/sample.jp.wakati.txt \
--output_file elmo.hdf5
python scripts/view_hdf5.py elmo.hdf5
Save sentence-level ELMo representations
View sentence similarities
Making a token vocab file
Making a character vocab file
Computing sentence
representations
python src/run_elmo.py \
--option_file
checkpoint/options.json \
--weight_file checkpoint/weight.hdf5
\
--word_file
data/vocab.sample.jp.wakati.txt \
--char_file
data/vocab.sample.jp.space.txt \
--data_file
data/sample.jp.wakati.txt \
--output_file elmo.hdf5 \
--sent_vec
python scripts/view_sent_sim.py \
--data data/sample.jp.wakati.txt \
--elmo elmo.hdf5
Training ELMo on a new corpus
python scripts/make_vocab_file.py \
--input_fn data/sample.jp.wakati.txt
\
--output_fn
data/vocab.sample.jp.wakati.txt
python scripts/space_split.py \
--input_fn data/sample.jp.wakati.txt
\
--output_fn data/sample.jp.space.txt
python scripts/make_vocab.py \
--input_fn data/sample.jp.space.txt
\
--output_fn
data/vocab.sample.jp.space.txt
Training ELMo
Retraining the trained ELMo
Computing token representations from the ELMo
Download: checkpoint, vocab tokens, vocab
characters
Computing sentence representations
python src/run_train.py \\
--train_prefix
data/sample.jp.wakati.txt \
--word_file
data/vocab.sample.jp.wakati.txt \
--char_file
data/vocab.sample.jp.space.txt \
--config_file data/config.json
--save_dir checkpoint
python src/run_train.py \
--train_prefix
data/sample.jp.wakati.txt \
--word_file
data/vocab.sample.jp.wakati.txt \
--char_file
data/vocab.sample.jp.space.txt \
--save_dir checkpoint \
--restart
python src/run_elmo.py \
--test_prefix
data/sample.jp.wakati.txt \
--word_file
data/vocab.sample.jp.wakati.txt \
--char_file
data/vocab.sample.jp.space.txt \
--save_dir checkpoint
Using the ELMo trained on
Wikipedia
python src/run_elmo.py \
--option_file data/checkpoint_wiki-
wakati-cleaned_token-10_epoch-
10/options.json \
--weight_file data/checkpoint_wiki-
wakati-cleaned_token-10_epoch-
Retraining the pre-trained ELMo on your corpus
Making a dataset for text classification
Computing sentence representations
10/weight.hdf5 \
--word_file
data/vocab.token.wiki_wakati.cleaned.min-
10.txt \
--char_file
data/vocab.char.wiki_wakati.cleaned.min-
0.txt \
--data_file
data/sample.jp.wakati.txt \
--output_file elmo.hdf5 \
--sent_vec
python src/run_train.py \
--train_prefix PATH_TO_YOUR_CORPUS \
--word_file
data/vocab.token.wiki_wakati.cleaned.min-
10.txt \
--char_file
data/vocab.char.wiki_wakati.cleaned.min-
0.txt \
--save_dir checkpoint_wiki-wakati-
cleaned_token-10_epoch-10 \
--restart
Checking performance in text
classification
cd data
./make_data.sh
python src/run_elmo.py \
--option_file data/checkpoint_wiki-
wakati-cleaned_token-10_epoch-
10/options.json \
--weight_file data/checkpoint_wiki-
wakati-cleaned_token-10_epoch-
10/weight.hdf5 \
--word_file
data/vocab.token.wiki_wakati.cleaned.min-
10.txt \
--char_file
data/vocab.char.wiki_wakati.cleaned.min-
0.txt \
--data_file data/dataset.wakati.txt
Predicting nearest neighbors
\
--output_file elmo.hdf5 \
--sent_vec
python src/knn.py \
--data data/dataset.wakati-label.txt
\
--elmo elmo.hdf5
LICENCE
|
# elmo-japanese
Tensorflow implementation of bidirectional language models (biLM) used to compute ELMo representations
from ["Deep contextualized word representations"](http://arxiv.org/abs/1802.05365).
This codebase is based on [bilm-tf](https://github.com/allenai/bilm-tf) and deals with Japanese.
This repository supports both training biLMs and using pre-trained models for prediction.
## Installation
- CPU
```
conda create -n elmo-jp python=3.6 anaconda
source activate elmo-jp
pip install tensorflow==1.10 h5py
git clone https://github.com/cl-tohoku/elmo-japanese.git
```
- GPU
```
conda create -n elmo-jp python=3.6 anaconda
source activate elmo-jp
pip install tensorflow-gpu==1.10 h5py
git clone https://github.com/cl-tohoku/elmo-japanese.git
```
## Getting started
- Training ELMo
```
python src/run_train.py \
--option_file data/config.json \
--save_dir checkpoint \
--word_file data/vocab.sample.jp.wakati.txt \
--char_file data/vocab.sample.jp.space.txt \
--train_prefix data/sample.jp.wakati.txt
```
- Computing representations from the trained biLM
The following command outputs the ELMo representations (elmo.hdf5) for the text (sample.jp.wakati.txt), using the trained model in the checkpoint directory (save_dir).
```
python src/run_elmo.py \
--option_file checkpoint/options.json \
--weight_file checkpoint/weight.hdf5 \
--word_file data/vocab.sample.jp.wakati.txt \
--char_file data/vocab.sample.jp.space.txt \
--data_file data/sample.jp.wakati.txt \
--output_file elmo.hdf5
```
The following command prints out information about elmo.hdf5, such as the number of sentences, words, and dimensions.
```
python scripts/view_hdf5.py elmo.hdf5
```
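If you want to look inside the file without the helper script, h5py is enough. The sketch below assumes only that `elmo.hdf5` is a regular HDF5 file; the exact key layout depends on `run_elmo.py` and is not specified here.
```python
import h5py

# Open the ELMo output and list what it contains.
with h5py.File("elmo.hdf5", "r") as f:
    keys = list(f.keys())
    print(f"{len(keys)} top-level entries")
    for key in keys[:3]:
        item = f[key]
        # Datasets have a shape; groups are just containers.
        shape = getattr(item, "shape", None)
        print(key, shape if shape is not None else "group")
```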
## Computing sentence representations
- Save sentence-level ELMo representations
```
python src/run_elmo.py \
--option_file checkpoint/options.json \
--weight_file checkpoint/weight.hdf5 \
--word_file data/vocab.sample.jp.wakati.txt \
--char_file data/vocab.sample.jp.space.txt \
--data_file data/sample.jp.wakati.txt \
--output_file elmo.hdf5 \
--sent_vec
```
- View sentence similarities
```
python scripts/view_sent_sim.py \
--data data/sample.jp.wakati.txt \
--elmo elmo.hdf5
```
## Training ELMo on a new corpus
- Making a token vocab file
```
python scripts/make_vocab_file.py \
--input_fn data/sample.jp.wakati.txt \
--output_fn data/vocab.sample.jp.wakati.txt
```
- Making a character vocab file
```
python scripts/space_split.py \
--input_fn data/sample.jp.wakati.txt \
--output_fn data/sample.jp.space.txt
```
```
python scripts/make_vocab.py \
--input_fn data/sample.jp.space.txt \
--output_fn data/vocab.sample.jp.space.txt
```
- Training ELMo
```
python src/run_train.py \\
--train_prefix data/sample.jp.wakati.txt \
--word_file data/vocab.sample.jp.wakati.txt \
--char_file data/vocab.sample.jp.space.txt \
--config_file data/config.json
--save_dir checkpoint
```
- Retraining the trained ELMo
```
python src/run_train.py \
--train_prefix data/sample.jp.wakati.txt \
--word_file data/vocab.sample.jp.wakati.txt \
--char_file data/vocab.sample.jp.space.txt \
--save_dir checkpoint \
--restart
```
- Computing token representations from the ELMo
```
python src/run_elmo.py \
--test_prefix data/sample.jp.wakati.txt \
--word_file data/vocab.sample.jp.wakati.txt \
--char_file data/vocab.sample.jp.space.txt \
--save_dir checkpoint
```
## Using the ELMo trained on Wikipedia
- Download: [checkpoint](https://drive.google.com/open?id=11tsu7cXV6KRS8aYnxoquEQ0xOp_i9mfa), [vocab tokens](https://drive.google.com/open?id=193JOeZcU6nSpGjJH9IP4Qn_UWiJXtRnp), [vocab characters](https://drive.google.com/open?id=15D8F3XRCm3oEdLBbl978KaJG_AAJDW4v)
- Computing sentence representations
```
python src/run_elmo.py \
--option_file data/checkpoint_wiki-wakati-cleaned_token-10_epoch-10/options.json \
--weight_file data/checkpoint_wiki-wakati-cleaned_token-10_epoch-10/weight.hdf5 \
--word_file data/vocab.token.wiki_wakati.cleaned.min-10.txt \
--char_file data/vocab.char.wiki_wakati.cleaned.min-0.txt \
--data_file data/sample.jp.wakati.txt \
--output_file elmo.hdf5 \
--sent_vec
```
- Retraining the pre-trained ELMo on your corpus
```
python src/run_train.py \
--train_prefix PATH_TO_YOUR_CORPUS \
--word_file data/vocab.token.wiki_wakati.cleaned.min-10.txt \
--char_file data/vocab.char.wiki_wakati.cleaned.min-0.txt \
--save_dir checkpoint_wiki-wakati-cleaned_token-10_epoch-10 \
--restart
```
## Checking performance in text classification
- Making a dataset for text classification
```
cd data
./make_data.sh
```
- Computing sentence representations
```
python src/run_elmo.py \
--option_file data/checkpoint_wiki-wakati-cleaned_token-10_epoch-10/options.json \
--weight_file data/checkpoint_wiki-wakati-cleaned_token-10_epoch-10/weight.hdf5 \
--word_file data/vocab.token.wiki_wakati.cleaned.min-10.txt \
--char_file data/vocab.char.wiki_wakati.cleaned.min-0.txt \
--data_file data/dataset.wakati.txt \
--output_file elmo.hdf5 \
--sent_vec
```
- Predicting nearest neighbors
```
python src/knn.py \
--data data/dataset.wakati-label.txt \
--elmo elmo.hdf5
```
## LICENCE
MIT Licence
|
[
"Language Models",
"Representation Learning",
"Semantic Text Processing"
] |
[] |
true |
https://github.com/ku-nlp/KWDLC
|
2019-11-06T04:07:47Z
|
Kyoto University Web Document Leads Corpus
|
ku-nlp / KWDLC
Public
Branches
Tags
Go to file
Code
.github
disc
doc
id
knp
org
scripts
.gitignore
.pre-commit-config.yaml
README.md
poetry.lock
pyproject.toml
requirements.txt
This is a Japanese text corpus that consists of the lead three sentences of web documents with various linguistic annotations. By collecting the lead
three sentences of web documents, this corpus covers documents with various genres and styles, such as news articles, encyclopedic
articles, blogs and commercial pages. It comprises approximately 5,000 documents, which correspond to 15,000 sentences.
The linguistic annotations consist of annotations of morphology, named entities, dependencies, predicate-argument structures including zero
anaphora, coreferences, and discourse. All the annotations except discourse annotations were given by manually modifying automatic
analyses of the morphological analyzer JUMAN and the dependency, case structure and anaphora analyzer KNP. The discourse annotations
were given by two types of annotators: experts and crowd workers.
This corpus consists of linguistically annotated Web documents that have been made publicly available on the Web at some time. The corpus
is released for the purpose of contributing to the research of natural language processing.
Since the collected documents are fragmentary, i.e., only the lead three sentences of each Web document, we have not obtained permission
from copyright owners of the Web documents and do not provide source information such as URL. If copyright owners of Web documents
request the addition of source information or deletion of these documents, we will update the corpus and newly release it. In this case, please
delete the downloaded old version and replace it with the new version.
About
Kyoto University Web Document
Leads Corpus
# japanese # corpus # named-entities
# part-of-speech # morphological-analysis
# dependency-parsing
Readme
Activity
Custom properties
77 stars
8 watching
7 forks
Report repository
Releases 3
v1.1.1
Latest
on Dec 18, 2023
+ 2 releases
Packages
No packages published
Contributors
7
Languages
Python 100.0%
Code
Issues
12
Pull requests
Actions
Projects
Wiki
Security
Insights
Kyoto University Web Document Leads Corpus
Overview
Notes
README
The annotation guidelines for this corpus are written in the manuals found in the "doc" directory. The guidelines for morphology and
dependencies are described in syn_guideline.pdf, those for predicate-argument structures and coreferences are described in rel_guideline.pdf,
and those for discourse relations are described in disc_guideline.pdf. The guidelines for named entities are available on the IREX website
(http://nlp.cs.nyu.edu/irex/).
knp/ : the corpus annotated with morphology, named entities, dependencies, predicate-argument structures, and coreferences
disc/ : the corpus annotated with discourse relations
org/ : the raw corpus
doc/ : annotation guidelines
id/ : document id files providing train/test split
# of
documents
# of
sentences
# of
morphemes
# of named
entities
# of
predicates
# of coreferring
mentions
train
3,915
11,745
194,490
6,267
51,702
16,079
dev
512
1,536
22,625
974
6,139
1,641
test
700
2,100
35,869
1,122
9,549
3,074
total
5,127
15,381
252,984
8,363
67,390
20,794
Annotations of this corpus are given in the following format.
The first line represents the ID of this sentence. In the subsequent lines, the lines starting with "*" denote "bunsetsu," the lines starting with "+"
denote basic phrases, and the other lines denote morphemes.
The line of morphemes is the same as the output of the morphological analyzers, JUMAN and Juman++. This information includes surface
string, reading, lemma, part of speech (POS), fine-grained POS, conjugate type, and conjugate form. "*" means that its field is not available.
Note that this format is slightly different from KWDLC 1.0, which adopted the same format as Kyoto University Text Corpus 4.0.
The line starting with "*" represents "bunsetsu," which is a conventional unit for dependency in Japanese. "Bunsetsu" consists of one or more
content words and zero or more function words. In this line, the first numeral means the ID of its depending head. The subsequent alphabet
denotes the type of dependency relation, i.e., "D" (normal dependency), "P" (coordination dependency), "I" (incomplete coordination
dependency), and "A" (appositive dependency).
The line starting with "+" represents a basic phrase, which is a unit to which various relations are annotated. A basic phrase consists of one
content word and zero or more function words. Therefore, it is equivalent to a bunsetsu or a part of a bunsetsu. In this line, the first numeral
means the ID of its depending head. The subsequent alphabet is defined in the same way as bunsetsu. The remaining part of this line includes
the annotations of named entity and various relations.
Notes on annotation guidelines
Distributed files
Statistics
Format of the corpus annotated with annotations of morphology, named entities,
dependencies, predicate-argument structures, and coreferences
# S-ID:w201106-0000010001-1
* 2D
+ 3D
太郎 たろう 太郎 名詞 6 人名 5 * 0 * 0
は は は 助詞 9 副助詞 2 * 0 * 0
* 2D
+ 2D
京都 きょうと 京都 名詞 6 地名 4 * 0 * 0
+ 3D <NE:ORGANIZATION:京都大学>
大学 だいがく 大学 名詞 6 普通名詞 1 * 0 * 0
に に に 助詞 9 格助詞 1 * 0 * 0
* -1D
+ -1D <rel type="ガ" target="太郎" sid="w201106-0000010001-1" id="0"/><rel type="ニ" target="大学" sid="w201106-
0000010001-1" id="2"/>
行った いった 行く 動詞 2 * 0 子音動詞カ行促音便形 3 タ形 10
EOS
Annotations of named entity are given in <NE> tags. <NE> has the following four attributes: type, target, possibility, and optional_type, which
mean the class of a named entity, the string of a named entity, possible classes for an OPTIONAL named entity, and a type for an OPTIONAL
named entity, respectively. The details of these attributes are described in the IREX annotation guidelines.
Annotations of various relations are given in <rel> tags. <rel> has the following four attributes: type, target, sid, and id, which mean the
name of a relation, the string of the counterpart, the sentence ID of the counterpart, and the basic phrase ID of the counterpart, respectively. If
a basic phrase has multiple tags of the same type, a "mode" attribute is also assigned, which has one of "AND," "OR," and "?." The details of
these attributes are described in the annotation guidelines (rel_guideline.pdf).
In this corpus, a clause pair is given a discourse type and its votes as follows.
The first line represents the ID of this document, the subsequent block denotes clause IDs and clauses, and the last block denotes discourse
relations for clause pairs and their voting results. These discourse relations and voting results are the results of the second stage of
crowdsourcing. Each line is the list of a discourse relation and its votes in order of votes. For the discourse relation annotated by experts, the
discourse direction is annotated; if it is reverse order, "(逆方向)" is added to the discourse relation. The details of annotation methods and
discourse relations are described in [Kawahara et al., 2014] and the annotation guidelines (disc_guideline.pdf).
Masatsugu Hangyo, Daisuke Kawahara and Sadao Kurohashi. Building a Diverse Document Leads Corpus Annotated with Semantic
Relations, In Proceedings of the 26th Pacific Asia Conference on Language Information and Computing, pp.535-544, 2012.
http://www.aclweb.org/anthology/Y/Y12/Y12-1058.pdf
萩行正嗣, 河原大輔, 黒橋禎夫. 多様な文書の書き始めに対する意味関係タグ付きコーパスの構築とその分析, 自然言語処理, Vol.21, No.2,
pp.213-248, 2014. https://doi.org/10.5715/jnlp.21.213
Daisuke Kawahara, Yuichiro Machida, Tomohide Shibata, Sadao Kurohashi, Hayato Kobayashi and Manabu Sassano. Rapid
Development of a Corpus with Discourse Annotations using Two-stage Crowdsourcing, In Proceedings of the 25th International
Conference on Computational Linguistics, pp.269-278, 2014. http://www.aclweb.org/anthology/C/C14/C14-1027.pdf
岸本裕大, 村脇有吾, 河原大輔, 黒橋禎夫. 日本語談話関係解析:タスク設計・談話標識の自動認識・ コーパスアノテーション, 自然言語処
理, Vol.27, No.4, pp.889-931, 2020. https://doi.org/10.5715/jnlp.27.889
The creation of this corpus was supported by JSPS KAKENHI Grant Number 24300053 and JST CREST "Advanced Core Technologies for Big
Data Integration." The discourse annotations were acquired by crowdsourcing under the support of Yahoo! Japan Corporation. We deeply
appreciate their support.
Format of the corpus annotated with discourse relations
# A-ID:w201106-0001998536
1 今日とある企業のトップの話を聞くことが出来た。
2 経営者として何事も全てビジネスチャンスに変えるマインドが大切だと感じた。
3 生きていく上で追い風もあれば、
4 逆風もある。
1-2 談話関係なし:5 原因・理由:4 条件:1
3-4 原因・理由:3 談話関係なし:2 逆接:2 対比:2 目的:1
References
Acknowledgment
|
# Kyoto University Web Document Leads Corpus
## Overview
This is a Japanese text corpus that consists of the lead three sentences
of web documents with various linguistic annotations. By collecting
the lead three sentences of web documents, this corpus covers documents
with various genres and styles, such as news articles, encyclopedic
articles, blogs and commercial pages. It comprises approximately 5,000
documents, which correspond to 15,000 sentences.
The linguistic annotations consist of annotations of morphology, named
entities, dependencies, predicate-argument structures including zero
anaphora, coreferences, and discourse. All the annotations except
discourse annotations were given by manually modifying automatic
analyses of the morphological analyzer JUMAN and the dependency, case
structure and anaphora analyzer KNP. The discourse annotations were
given by two types of annotators: experts and crowd workers.
## Notes
This corpus consists of linguistically annotated Web documents that
have been made publicly available on the Web at some time. The corpus
is released for the purpose of contributing to the research of natural
language processing.
Since the collected documents are fragmentary, i.e., only the lead
three sentences of each Web document, we have not obtained permission
from copyright owners of the Web documents and do not provide source
information such as URL. If copyright owners of Web documents request
the addition of source information or deletion of these documents, we will
update the corpus and newly release it. In this case, please delete
the downloaded old version and replace it with the new version.
## Notes on annotation guidelines
The annotation guidelines for this corpus are written in the manuals
found in the "doc" directory. The guidelines for morphology and
dependencies are described in syn_guideline.pdf, those for
predicate-argument structures and coreferences are described in
rel_guideline.pdf, and those for discourse relations are described in
disc_guideline.pdf. The guidelines for named entities are available on
the IREX website (<http://nlp.cs.nyu.edu/irex/>).
## Distributed files
* `knp/`: the corpus annotated with morphology, named entities, dependencies, predicate-argument structures, and
coreferences
* `disc/`: the corpus annotated with discourse relations
* `org/`: the raw corpus
* `doc/`: annotation guidelines
* `id/`: document id files providing train/test split
## Statistics
| | # of documents | # of sentences | # of morphemes | # of named entities | # of predicates | # of coreferring mentions |
|-------|---------------:|---------------:|---------------:|--------------------:|----------------:|--------------------------:|
| train | 3,915 | 11,745 | 194,490 | 6,267 | 51,702 | 16,079 |
| dev | 512 | 1,536 | 22,625 | 974 | 6,139 | 1,641 |
| test | 700 | 2,100 | 35,869 | 1,122 | 9,549 | 3,074 |
| total | 5,127 | 15,381 | 252,984 | 8,363 | 67,390 | 20,794 |
## Format of the corpus annotated with annotations of morphology, named entities, dependencies, predicate-argument structures, and coreferences
Annotations of this corpus are given in the following format.
```text
# S-ID:w201106-0000010001-1
* 2D
+ 3D
太郎 たろう 太郎 名詞 6 人名 5 * 0 * 0
は は は 助詞 9 副助詞 2 * 0 * 0
* 2D
+ 2D
京都 きょうと 京都 名詞 6 地名 4 * 0 * 0
+ 3D <NE:ORGANIZATION:京都大学>
大学 だいがく 大学 名詞 6 普通名詞 1 * 0 * 0
に に に 助詞 9 格助詞 1 * 0 * 0
* -1D
+ -1D <rel type="ガ" target="太郎" sid="w201106-0000010001-1" id="0"/><rel type="ニ" target="大学" sid="w201106-0000010001-1" id="2"/>
行った いった 行く 動詞 2 * 0 子音動詞カ行促音便形 3 タ形 10
EOS
```
The first line represents the ID of this sentence. In the subsequent
lines, the lines starting with "*" denote "bunsetsu," the lines starting
with "+" denote basic phrases, and the other lines denote morphemes.
The line of morphemes is the same as the output of the morphological
analyzers, JUMAN and Juman++. This information includes surface
string, reading, lemma, part of speech (POS), fine-grained POS,
conjugate type, and conjugate form. "*" means that its field is not
available. Note that this format is slightly different from KWDLC 1.0,
which adopted the same format as Kyoto University Text Corpus 4.0.
The line starting with "*" represents "bunsetsu," which is a
conventional unit for dependency in Japanese. "Bunsetsu" consists of
one or more content words and zero or more function words. In this
line, the first numeral means the ID of its depending head. The subsequent alphabet
denotes the type of dependency relation, i.e., "D" (normal
dependency), "P" (coordination dependency), "I" (incomplete
coordination dependency), and "A" (appositive dependency).
The line starting with "+" represents a basic phrase, which is a unit
to which various relations are annotated. A basic phrase consists of
one content word and zero or more function words. Therefore, it is
equivalent to a bunsetsu or a part of a bunsetsu. In this line, the
first numeral means the ID of its depending head. The subsequent alphabet is
defined in the same way as bunsetsu. The remaining part of this line
includes the annotations of named entity and various relations.
Annotations of named entity are given in `<NE>` tags. `<NE>` has the
following four attributes: type, target, possibility, and
optional_type, which mean the class of a named entity, the string of
a named entity, possible classes for an OPTIONAL named entity, and a
type for an OPTIONAL named entity, respectively. The details of these
attributes are described in the IREX annotation guidelines.
Annotations of various relations are given in `<rel>` tags. `<rel>` has
the following four attributes: type, target, sid, and id, which mean
the name of a relation, the string of the counterpart, the sentence ID
of the counterpart, and the basic phrase ID of the counterpart,
respectively. If a basic phrase has multiple tags of the same type, a
"mode" attribute is also assigned, which has one of "AND," "OR," and
"?." The details of these attributes are described in the annotation
guidelines (rel_guideline.pdf).
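Because the format is line-oriented, a few string checks are enough to walk one sentence. The sketch below is based only on the conventions described above (a sentence ID line, "*" for bunsetsu, "+" for basic phrases, "EOS", and space-separated morpheme lines); the file name `example.knp` is a placeholder.
```python
def parse_knp_sentence(lines):
    """Collect counts and surface forms from one sentence in the format above."""
    sent_id, n_bunsetsu, n_base_phrases, surfaces = None, 0, 0, []
    for line in lines:
        if line.startswith("# S-ID:"):
            sent_id = line[len("# S-ID:"):].strip()
        elif line.startswith("* "):        # bunsetsu line: head ID + dependency type
            n_bunsetsu += 1
        elif line.startswith("+ "):        # basic phrase line: head ID, type, NE/rel tags
            n_base_phrases += 1
        elif line.strip() == "EOS":
            break
        else:                              # morpheme line: surface reading lemma POS ...
            surfaces.append(line.split(" ")[0])
    return sent_id, n_bunsetsu, n_base_phrases, surfaces


with open("example.knp", encoding="utf-8") as f:
    print(parse_knp_sentence(f.read().splitlines()))
```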
## Format of the corpus annotated with discourse relations
In this corpus, a clause pair is given a discourse type and its votes as follows.
```text
# A-ID:w201106-0001998536
1 今日とある企業のトップの話を聞くことが出来た。
2 経営者として何事も全てビジネスチャンスに変えるマインドが大切だと感じた。
3 生きていく上で追い風もあれば、
4 逆風もある。
1-2 談話関係なし:5 原因・理由:4 条件:1
3-4 原因・理由:3 談話関係なし:2 逆接:2 対比:2 目的:1
```
The first line represents the ID of this document, the subsequent
block denotes clause IDs and clauses, and the last block denotes
discourse relations for clause pairs and their voting results. These
discourse relations and voting results are the results of the second
stage of crowdsourcing. Each line is the list of a discourse relation
and its votes in order of votes. For the discourse relation annotated
by experts, the discourse direction is annotated; if it is reverse order,
"(逆方向)" is added to the discourse relation. The details of annotation
methods and discourse relations are described in [Kawahara et al., 2014]
and the annotation guidelines (disc_guideline.pdf).
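The vote lines can be split the same way. A minimal sketch for one line in the format shown above:
```python
def parse_discourse_votes(line):
    """Split '1-2 relation:votes relation:votes ...' into a clause pair and vote counts."""
    pair, *fields = line.split()
    votes = {}
    for field in fields:
        relation, count = field.rsplit(":", 1)   # relation labels may carry "(逆方向)"
        votes[relation] = int(count)
    return pair, votes


print(parse_discourse_votes("1-2 談話関係なし:5 原因・理由:4 条件:1"))
# -> ('1-2', {'談話関係なし': 5, '原因・理由': 4, '条件': 1})
```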
## References
* Masatsugu Hangyo, Daisuke Kawahara and Sadao Kurohashi. Building a Diverse Document Leads Corpus Annotated with
Semantic Relations, In Proceedings of the 26th Pacific Asia Conference on Language Information and Computing,
pp.535-544, 2012. <http://www.aclweb.org/anthology/Y/Y12/Y12-1058.pdf>
* 萩行正嗣, 河原大輔, 黒橋禎夫. 多様な文書の書き始めに対する意味関係タグ付きコーパスの構築とその分析, 自然言語処理,
Vol.21, No.2, pp.213-248, 2014. <https://doi.org/10.5715/jnlp.21.213>
* Daisuke Kawahara, Yuichiro Machida, Tomohide Shibata, Sadao Kurohashi, Hayato Kobayashi and Manabu Sassano. Rapid
Development of a Corpus with Discourse Annotations using Two-stage Crowdsourcing, In Proceedings of the 25th
International Conference on Computational Linguistics, pp.269-278,
2014. <http://www.aclweb.org/anthology/C/C14/C14-1027.pdf>
* 岸本裕大, 村脇有吾, 河原大輔, 黒橋禎夫. 日本語談話関係解析:タスク設計・談話標識の自動認識・ コーパスアノテーション,
自然言語処理, Vol.27, No.4, pp.889-931, 2020. <https://doi.org/10.5715/jnlp.27.889>
## Acknowledgment
The creation of this corpus was supported by JSPS KAKENHI Grant Number 24300053 and JST CREST "Advanced Core
Technologies for Big Data Integration." The discourse annotations were acquired by crowdsourcing under the support of
Yahoo! Japan Corporation. We deeply appreciate their support.
## Contact
If you have any questions or problems with this corpus, please send an email to nl-resource at nlp.ist.i.kyoto-u.ac.jp.
If you have a request to add source information or to delete a document in the corpus, please send an email to this mail
address.
|
[
"Morphology",
"Named Entity Recognition",
"Semantic Text Processing",
"Syntactic Parsing",
"Syntactic Text Processing",
"Tagging"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/ku-nlp/JumanDIC
|
2019-11-10T23:53:09Z
|
This repository contains source dictionary files to build dictionaries for JUMAN and Juman++.
|
ku-nlp / JumanDIC
Public
Branches
Tags
Go to file
Code
dic
emoji
experi…
grammar
kaomoji
onoma…
scripts
userdic
webdic
wikipe…
wiktion…
.gitignore
Makefile
Makefil…
blackli…
del_we…
jppdic.…
ken2to…
ken2to…
readm…
require…
About
No description or website provided.
# nlp # dictionary # japanese
# morphological-analysis # juman
Readme
Activity
Custom properties
1 star
6 watching
1 fork
Report repository
Releases
No releases published
Packages
No packages published
Contributors
6
Languages
Perl 51.7%
Python 37.7%
Ruby 6.8%
Makefile 2.6%
Shell 1.2%
Code
Issues
1
Pull requests
Actions
Projects
Security
Insights
This repository contains source dictionary files to build
dictionaries for JUMAN and Juman++.
blacklist_entries.txt : Lists words that should not be
included in the final dictionary.
Overview
Blacklist
To generate a dictionary for
Juman++
make jumanpp
To generate a dictionary for
JUMAN
make -f Makefile_juman juman
README
|
# Overview
This repository contains source dictionary files to build dictionaries for JUMAN and Juman++.
## Blacklist
`blacklist_entries.txt`: Lists words that should not be included in the final dictionary.
## To generate a dictionary for Juman++
```bash
make jumanpp
```
## To generate a dictionary for JUMAN
```bash
make -f Makefile_juman juman
```
|
[] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/MorinoseiMorizo/jparacrawl-finetune
|
2019-11-17T05:46:57Z
|
An example usage of JParaCrawl pre-trained Neural Machine Translation (NMT) models.
|
MorinoseiMorizo / jparacrawl-finetune
Public
Branches
Tags
Go to file
Go to file
Code
docker
en-ja
ja-en
scripts
README.md
get-data.sh
preprocess.sh
This repository includes an example usage of JParaCrawl pre-trained Neural Machine Translation (NMT) models.
Our goal is to train (fine-tune) the domain-adapted NMT model in a few hours.
We wrote this document as beginner-friendly so that many people can try NMT experiments. Thus, some parts might be too easy or redundant
for experts.
In this example, we focus on fine-tuning the pre-trained model for KFTT corpus, which contains Wikipedia articles related to Kyoto. We
prepared two examples, English-to-Japanese and Japanese-to-English. We recommend trying the English-to-Japanese example if you are a
fluent Japanese speaker, and the Japanese-to-English example otherwise, since we expect you to read the MT output. In the following, we use the Japanese-to-
English example.
JParaCrawl is the largest publicly available English-Japanese parallel corpus created by NTT. In this example, we will fine-tune the model pre-
trained on JParaCrawl.
For more details about JParaCrawl, visit the official web site.
http://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/
This example uses the following.
Python 3
PyTorch
fairseq
sentencepiece
MeCab with IPA dic
NVIDIA GPU with CUDA
For fairseq, we recommend using the same version we used to pre-train the model.
About
An example usage of JParaCrawl
pre-trained Neural Machine
Translation (NMT) models.
www.kecl.ntt.co.jp/icl/lirg/jparacr…
Readme
Activity
103 stars
3 watching
8 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Shell 95.0%
Dockerfile 4.5%
Python 0.5%
Code
Issues
2
Pull requests
Actions
Projects
Security
Insights
JParaCrawl Fine-tuning Example
JParaCrawl
Requirements
$ cd fairseq
$ git checkout c81fed46ac7868c6d80206ff71c6f6cfe93aee22
README
We prepared a Docker container with the prerequisites already installed. Use the following commands to run it. Note that you can change
~/jparacrawl-experiments to the path where you want to store the experimental results. It will be mounted in the container as /host_disk .
First, you need to prepare the corpus and pre-trained model.
These commands will download the KFTT corpus and the pre-trained NMT model. Then will tokenize the corpus to subwords with the provided
sentencepiece models. The subword tokenized corpus is located at ./corpus/spm .
You can see that a word is tokenized into several subwords. We use subwords to reduce the vocabulary size and express a low-frequent word
as a combination of subwords. For example, the word revolutionized is tokenized into revolutionize and d .
Before fine-tuning experiments, let's try to decode (translate) a file with the pre-trained model to see how the current model works. We
prepared decode.sh that decodes the KFTT test set with the pre-trained NMT model.
We can automatically evaluate the translation results by comparing them against reference translations. Here, we use BLEU scores, the most widely used
evaluation metric in the MT community. The script automatically calculates the BLEU score and saves it to decode/test.log . BLEU scores
range from 0 to 100, so this result is somewhat low.
It is also important to check outputs as well as BLEU scores. Input and output files are located on ./corpus/kftt-data-
1.0/data/orig/kyoto-test.ja and ./decode/kyoto-test.ja.true.detok .
This is just an example so the result may vary.
You can also find the reference translations at ./corpus/kftt-data-1.0/data/orig/kyoto-test.en .
Docker
$ docker pull morinoseimorizo/jparacrawl-fairseq
$ docker run -it --gpus 1 -v ~/jparacrawl-experiments:/host_disk morinoseimorizo/jparacrawl-fairseq bash
Prepare the data
$ cd /host_disk
$ git clone https://github.com/MorinoseiMorizo/jparacrawl-finetune.git # Clone the repository.
$ cd jparacrawl-finetune
$ ./get-data.sh # This script will download KFTT and sentencepiece model for pre-processing the corpus.
$ ./preprocess.sh # Split the corpus into subwords.
$ cp ./ja-en/*.sh ./ # If you try the English-to-Japanese example, use en-ja directory instead.
$ ./get-model.sh # Download the pre-trained model.
$ head -n 2 corpus/spm/kyoto-train.en
▁Known ▁as ▁Se s shu ▁( 14 20 ▁- ▁150 6) , ▁he ▁was ▁an ▁ink ▁painter ▁and ▁Zen ▁monk ▁active ▁in ▁the ▁Muromachi ▁pe
▁He ▁revolutionize d ▁the ▁Japanese ▁ink ▁painting .
Decoding with pre-trained NMT models
$ ./decode.sh
Evaluation
$ cat decode/test.log
BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.4.2 = 14.2 50.4/22.0/11.2/5.9 (BP = 0.868 ratio = 0.876 hyp_l
$ head -n4 ./corpus/kftt-data-1.0/data/orig/kyoto-test.ja
InfoboxBuddhist
道元(どうげん)は、鎌倉時代初期の禅僧。
曹洞宗の開祖。
晩年に希玄という異称も用いた。。
$ head -n4 ./decode/kyoto-test.ja.true.detok
InfoboxBuddhist
Dogen is a Zen monk from the early Kamakura period.
The founder of the Soto sect.
In his later years, he also used the heterogeneous name "Legend".
The current model mistranslated the name "Kigen" to "Legend" at line 4. Also, "heterogeneous" is not an appropriate translation. Let's see how
this could be improved by fine-tuning.
Now, let's move to fine-tuning. By fine-tuning, the model will adapt to the specific domain, KFTT. Thus, we can expect the translation accuracy
improves.
Following scripts will fine-tune the pre-trained model with the KFTT training set.
Modern GPUs can use mixed-precision training that make use of Tensor Cores, which can compute half-precision floating-point faster. If you
want to use this feature, run fine-tune_kftt_mixed.sh instead of fine-tune_kftt_fp32.sh with Volta or later generations GPUs such as
Tesla V100 or Geforce RTX 2080 Ti GPUs.
Training will take several hours to finish. We tested on single RTX 2080Ti GPU with mixed-precision training and it finished in two hours.
Training time drastically differs based on the environment, so it may take a few more hours.
Once it finished, you can find the BLEU score on the models/fine-tune/test.log . You can see the BLEU score is greatly improved by fine-
tuning.
Translated text is on ./models/fine-tune/kyoto-test.ja.true.detok .
The fine-tuned model could correctly translate line 4.
In this document, we described how to use the pre-trained model and fine-tune it with KFTT. By fine-tuning, we can obtain the domain-specific
NMT model with a low computational cost.
We listed some examples to go further for NMT beginners.
Looking into the provided scripts and find what commands are used.
Try to translate your documents with the pre-trained and fine-tuned models.
You need to edit decode.sh .
See how well the model works.
Try fine-tuning with other English-Japanese parallel corpora.
You can find the corpora from:
OPUS
A list created by Prof. Neubig.
You need to tokenize it to subwords first.
Modify preprocess.sh .
$ head -n4 ./corpus/kftt-data-1.0/data/orig/kyoto-test.en
Infobox Buddhist
Dogen was a Zen monk in the early Kamakura period.
The founder of Soto Zen
Later in his life he also went by the name Kigen.
Fine-tuning on KFTT corpus
$ nohup ./fine-tune_kftt_fp32.sh &> fine-tune.log &
$ tail -f fine-tune.log
Evaluation
$ cat models/fine-tune/test.log
BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.4.2 = 26.4 57.8/31.7/20.1/13.5 (BP = 0.992 ratio = 0.992 hyp_
$ head -n4 models/fine-tune/kyoto-test.ja.true.detok
Nickel buddhist
Dogen was a Zen priest in the early Kamakura period.
He was the founder of the Soto sect.
In his later years, he also used another name, Kigen.
Conclusion
Next steps
NMT architectures
Sequence to Sequence Learning with Neural Networks
Effective Approaches to Attention-based Neural Machine Translation
Attention Is All You Need
Subwords
Neural Machine Translation of Rare Words with Subword Units
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
Corpora
JParaCrawl: A Large Scale Web-Based English-Japanese Parallel Corpus
The Kyoto Free Translation Task (KFTT)
Tools
fairseq: A Fast, Extensible Toolkit for Sequence Modeling
Please send an issue on GitHub or contact us by email.
NTT Communication Science Laboratories
Makoto Morishita
jparacrawl-ml -a- hco.ntt.co.jp
Further reading
Contact
|
# JParaCrawl Fine-tuning Example
This repository includes an example usage of JParaCrawl pre-trained Neural Machine Translation (NMT) models.
Our goal is to train (fine-tune) the domain-adapted NMT model in a few hours.
We wrote this document to be beginner-friendly so that many people can try NMT experiments.
Thus, some parts might be too easy or redundant for experts.
In this example, we focus on fine-tuning the pre-trained model for KFTT corpus, which contains Wikipedia articles related to Kyoto.
We prepared two examples, English-to-Japanese and Japanese-to-English.
We recommend trying the English-to-Japanese example if you are a fluent Japanese speaker, and the Japanese-to-English example otherwise, since we expect you to read the MT output.
In the following, we use the Japanese-to-English example.
## JParaCrawl
JParaCrawl is the largest publicly available English-Japanese parallel corpus created by NTT.
In this example, we will fine-tune the model pre-trained on JParaCrawl.
For more details about JParaCrawl, visit the official web site.
http://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/
## Requirements
This example uses the following.
- Python 3
- [PyTorch](https://pytorch.org/)
- [fairseq](https://github.com/pytorch/fairseq)
- [sentencepiece](https://github.com/google/sentencepiece)
- [MeCab](https://taku910.github.io/mecab/) with IPA dic
- NVIDIA GPU with CUDA
For fairseq, we recommend using the same version we used to pre-train the model.
```
$ cd fairseq
$ git checkout c81fed46ac7868c6d80206ff71c6f6cfe93aee22
```
### Docker
We prepared a Docker container with the prerequisites already installed.
Use the following commands to run it.
Note that you can change `~/jparacrawl-experiments` to the path where you want to store the experimental results.
It will be mounted in the container as `/host_disk`.
``` sh
$ docker pull morinoseimorizo/jparacrawl-fairseq
$ docker run -it --gpus 1 -v ~/jparacrawl-experiments:/host_disk morinoseimorizo/jparacrawl-fairseq bash
```
## Prepare the data
First, you need to prepare the corpus and pre-trained model.
``` sh
$ cd /host_disk
$ git clone https://github.com/MorinoseiMorizo/jparacrawl-finetune.git # Clone the repository.
$ cd jparacrawl-finetune
$ ./get-data.sh # This script will download KFTT and sentencepiece model for pre-processing the corpus.
$ ./preprocess.sh # Split the corpus into subwords.
$ cp ./ja-en/*.sh ./ # If you try the English-to-Japanese example, use en-ja directory instead.
$ ./get-model.sh # Download the pre-trained model.
```
These commands will download the KFTT corpus and the pre-trained NMT model.
They will then tokenize the corpus into subwords with the provided sentencepiece models.
The subword-tokenized corpus is located at `./corpus/spm`.
``` sh
$ head -n 2 corpus/spm/kyoto-train.en
▁Known ▁as ▁Se s shu ▁( 14 20 ▁- ▁150 6) , ▁he ▁was ▁an ▁ink ▁painter ▁and ▁Zen ▁monk ▁active ▁in ▁the ▁Muromachi ▁period ▁in ▁the ▁latter ▁half ▁of ▁the ▁15 th ▁century , ▁and ▁was ▁called ▁a ▁master ▁painter .
▁He ▁revolutionize d ▁the ▁Japanese ▁ink ▁painting .
```
You can see that a word is tokenized into several subwords.
We use subwords to reduce the vocabulary size and express a low-frequency word as a combination of subwords.
For example, the word `revolutionized` is tokenized into `revolutionize` and `d`.
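The same subword split can be reproduced with the sentencepiece Python API. A minimal sketch; the model path is an assumption, so use whichever `.model` file `get-data.sh` placed in your checkout.
```python
import sentencepiece as spm

# Load the provided sentencepiece model (path is an assumption; adjust to your checkout).
sp = spm.SentencePieceProcessor()
sp.load("enja_spm_models/spm.en.nopretok.model")

# Reproduce the subword split shown above.
print(sp.encode_as_pieces("He revolutionized the Japanese ink painting."))
```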
## Decoding with pre-trained NMT models
Before fine-tuning experiments, let's try to decode (translate) a file with the pre-trained model to see how the current model works.
We prepared `decode.sh` that decodes the KFTT test set with the pre-trained NMT model.
``` sh
$ ./decode.sh
```
### Evaluation
We can automatically evaluate the translation results by comparing them against reference translations.
Here, we use [BLEU](https://www.aclweb.org/anthology/P02-1040/) scores, the most widely used evaluation metric in the MT community.
The script automatically calculates the BLEU score and saves it to `decode/test.log`.
BLEU scores range from 0 to 100, so this result is somewhat low.
``` sh
$ cat decode/test.log
BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.4.2 = 14.2 50.4/22.0/11.2/5.9 (BP = 0.868 ratio = 0.876 hyp_len = 24351 ref_len = 27790)
```
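The signature in that line (`tok.intl`, `smooth.exp`) is sacrebleu's, so the score can also be recomputed from the detokenized output and the reference file. A minimal sketch, assuming sacrebleu is installed:
```python
import sacrebleu

# Detokenized system output (English) produced by decode.sh.
with open("decode/kyoto-test.ja.true.detok", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]

# Reference translations from the KFTT test set.
with open("corpus/kftt-data-1.0/data/orig/kyoto-test.en", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

# corpus_bleu expects one list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="intl")
print(bleu.score)
```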
It is also important to check outputs as well as BLEU scores.
The input and output files are located at `./corpus/kftt-data-1.0/data/orig/kyoto-test.ja` and `./decode/kyoto-test.ja.true.detok`.
```
$ head -n4 ./corpus/kftt-data-1.0/data/orig/kyoto-test.ja
InfoboxBuddhist
道元(どうげん)は、鎌倉時代初期の禅僧。
曹洞宗の開祖。
晩年に希玄という異称も用いた。。
$ head -n4 ./decode/kyoto-test.ja.true.detok
InfoboxBuddhist
Dogen is a Zen monk from the early Kamakura period.
The founder of the Soto sect.
In his later years, he also used the heterogeneous name "Legend".
```
This is just an example so the result may vary.
You can also find the reference translations at `./corpus/kftt-data-1.0/data/orig/kyoto-test.en`.
```
$ head -n4 ./corpus/kftt-data-1.0/data/orig/kyoto-test.en
Infobox Buddhist
Dogen was a Zen monk in the early Kamakura period.
The founder of Soto Zen
Later in his life he also went by the name Kigen.
```
The current model mistranslated the name "Kigen" to "Legend" at line 4.
Also, "heterogeneous" is not an appropriate translation.
Let's see how this could be improved by fine-tuning.
## Fine-tuning on KFTT corpus
Now, let's move to fine-tuning.
By fine-tuning, the model will adapt to the specific domain, KFTT.
Thus, we can expect the translation accuracy to improve.
The following script will fine-tune the pre-trained model with the KFTT training set.
``` sh
$ nohup ./fine-tune_kftt_fp32.sh &> fine-tune.log &
$ tail -f fine-tune.log
```
Modern GPUs can use [mixed-precision training](https://arxiv.org/abs/1710.03740), which makes use of Tensor Cores to compute half-precision floating-point operations faster.
If you want to use this feature, run `fine-tune_kftt_mixed.sh` instead of `fine-tune_kftt_fp32.sh` on Volta or later generation GPUs such as the Tesla V100 or GeForce RTX 2080 Ti.
Training will take several hours to finish.
We tested on a single RTX 2080 Ti GPU with mixed-precision training, and it finished in two hours.
Training time differs drastically depending on the environment, so it may take a few more hours.
### Evaluation
Once it finishes, you can find the BLEU score in `models/fine-tune/test.log`.
You can see that the BLEU score is greatly improved by fine-tuning.
``` sh
$ cat models/fine-tune/test.log
BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.4.2 = 26.4 57.8/31.7/20.1/13.5 (BP = 0.992 ratio = 0.992 hyp_len = 27572 ref_len = 27790)
```
The translated text is in `./models/fine-tune/kyoto-test.ja.true.detok`.
``` sh
$ head -n4 models/fine-tune/kyoto-test.ja.true.detok
Nickel buddhist
Dogen was a Zen priest in the early Kamakura period.
He was the founder of the Soto sect.
In his later years, he also used another name, Kigen.
```
The fine-tuned model could correctly translate line 4.
## Conclusion
In this document, we described how to use the pre-trained model and fine-tune it with KFTT.
By fine-tuning, we can obtain a domain-specific NMT model at a low computational cost.
## Next steps
We list some ways for NMT beginners to go further.
- Look into the provided scripts and find out which commands are used.
- Try to translate your documents with the pre-trained and fine-tuned models.
- You need to edit `decode.sh`.
- See how well the model works.
- Try fine-tuning with other English-Japanese parallel corpora.
- You can find the corpora from:
- [OPUS](http://opus.nlpl.eu/)
- [A list created by Prof. Neubig.](http://www.phontron.com/japanese-translation-data.php)
- You need to tokenize it to subwords first.
- Modify `preprocess.sh`.
## Further reading
- NMT architectures
- [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)
- [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025)
- [Attention Is All You Need](https://arxiv.org/abs/1706.03762)
- Subwords
- [Neural Machine Translation of Rare Words with Subword Units](https://arxiv.org/abs/1508.07909)
- [SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing](https://arxiv.org/abs/1808.06226)
- Corpora
- JParaCrawl: A Large Scale Web-Based English-Japanese Parallel Corpus
- [The Kyoto Free Translation Task (KFTT)](http://www.phontron.com/kftt/)
- Tools
- [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038)
## Contact
Please send an issue on GitHub or contact us by email.
NTT Communication Science Laboratories
Makoto Morishita
jparacrawl-ml -a- hco.ntt.co.jp
|
[
"Language Models",
"Machine Translation",
"Multilinguality",
"Semantic Text Processing",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/WorksApplications/sudachi.rs
|
2019-11-23T12:04:58Z
|
SudachiPy 0.6* and above are developed as Sudachi.rs.
|
WorksApplications / sudachi.rs
Public
Branches
Tags
Go to file
Go to file
Code
.github
docs
plugin
python
resources
sudachi-cli
sudachi-fuzz
sudachi
.gitignore
CHANGELOG.md
Cargo.lock
Cargo.toml
LICENSE
README.ja.md
README.md
fetch_dictionary.sh
logo.png
update_version.sh
Rust: passing
2023-12-14 UPDATE: 0.6.8 Release
Try it:
sudachi.rs is a Rust implementation of Sudachi, a Japanese morphological analyzer.
日本語 README SudachiPy Documentation
About
Sudachi in Rust 🦀 and new
generation of SudachiPy
# python # rust # segmentation # pos-tagging
# morphological-analysis # tokenization
# sudachi # nlp-libary
Readme
Apache-2.0 license
Activity
Custom properties
318 stars
7 watching
34 forks
Report repository
Releases 10
v0.6.8
Latest
on Dec 14, 2023
+ 9 releases
Sponsor this project
Learn more about GitHub Sponsors
Packages
No packages published
Contributors
11
Languages
Rust 89.1%
Python 10.1%
Other 0.8%
Code
Issues
29
Pull requests
9
Actions
Projects
Security
Insights
…
sudachi.rs - English README
pip install --upgrade 'sudachipy>=0.6.8'
Sponsor
README
Apache-2.0 license
Multi-granular Tokenization
Normalized Form
Wakati (space-delimited surface form) Output
You need sudachi.rs, the default plugins, and a dictionary. (This crate doesn't include a dictionary.)
Sudachi requires a dictionary to operate. You can download a dictionary ZIP file from WorksApplications/SudachiDict (choose one of
small , core , or full ), unzip it, and place the system_*.dic file somewhere. With the default settings file, sudachi.rs assumes that it is
placed at resources/system.dic .
TL;DR
$ git clone https://github.com/WorksApplications/sudachi.rs.git
$ cd ./sudachi.rs
$ cargo build --release
$ cargo install --path sudachi-cli/
$ ./fetch_dictionary.sh
$ echo "高輪ゲートウェイ駅" | sudachi
高輪ゲートウェイ駅 名詞,固有名詞,一般,*,*,* 高輪ゲートウェイ駅
EOS
Example
$ echo 選挙管理委員会 | sudachi
選挙管理委員会 名詞,固有名詞,一般,*,*,* 選挙管理委員会
EOS
$ echo 選挙管理委員会 | sudachi --mode A
選挙 名詞,普通名詞,サ変可能,*,*,* 選挙
管理 名詞,普通名詞,サ変可能,*,*,* 管理
委員 名詞,普通名詞,一般,*,*,* 委員
会 名詞,普通名詞,一般,*,*,* 会
EOS
$ echo 打込む かつ丼 附属 vintage | sudachi
打込む 動詞,一般,*,*,五段-マ行,終止形-一般 打ち込む
空白,*,*,*,*,*
かつ丼 名詞,普通名詞,一般,*,*,* カツ丼
空白,*,*,*,*,*
附属 名詞,普通名詞,サ変可能,*,*,* 付属
空白,*,*,*,*,*
vintage 名詞,普通名詞,一般,*,*,* ビンテージ
EOS
$ cat lemon.txt
えたいの知れない不吉な塊が私の心を始終圧えつけていた。
焦躁と言おうか、嫌悪と言おうか――酒を飲んだあとに宿酔があるように、酒を毎日飲んでいると宿酔に相当した時期がやって来る。
それが来たのだ。これはちょっといけなかった。
$ sudachi --wakati lemon.txt
えたい の 知れ ない 不吉 な 塊 が 私 の 心 を 始終 圧え つけ て い た 。
焦躁 と 言おう か 、 嫌悪 と 言おう か ― ― 酒 を 飲ん だ あと に 宿酔 が ある よう に 、 酒 を 毎日 飲ん で いる と 宿酔 に 相当 し
それ が 来 た の だ 。 これ は ちょっと いけ なかっ た 。
Setup
1. Get the source code
$ git clone https://github.com/WorksApplications/sudachi.rs.git
2. Download a Sudachi Dictionary
Convenience Script
Optionally, you can use the fetch_dictionary.sh shell script to download a dictionary and install it as resources/system.dic .
This is currently unimplemented and does not work; see #35.
Specify the bake_dictionary feature to embed a dictionary into the binary. The sudachi executable will then contain the dictionary data. The
baked dictionary is used if none is specified via a CLI option or the settings file.
You must specify the path to the dictionary file in the SUDACHI_DICT_PATH environment variable when building. SUDACHI_DICT_PATH is relative to
the sudachi.rs directory (or absolute).
Example on Unix-like system:
$ ./fetch_dictionary.sh
3. Build
$ cargo build --release
Build (bake dictionary into binary)
# Download dictionary to resources/system.dic
$ ./fetch_dictionary.sh
# Build with bake_dictionary feature (relative path)
$ env SUDACHI_DICT_PATH=resources/system.dic cargo build --release --features bake_dictionary
# or
# Build with bake_dictionary feature (absolute path)
$ env SUDACHI_DICT_PATH=/path/to/my-sudachi.dic cargo build --release --features bake_dictionary
4. Install
sudachi.rs/ $ cargo install --path sudachi-cli/
$ which sudachi
/Users/<USER>/.cargo/bin/sudachi
$ sudachi -h
sudachi 0.6.0
A Japanese tokenizer
...
Usage as a command
$ sudachi -h
A Japanese tokenizer
Usage: sudachi [OPTIONS] [FILE] [COMMAND]
Commands:
build
Builds system dictionary
ubuild
Builds user dictionary
dump
help
Print this message or the help of the given subcommand(s)
Arguments:
[FILE]
Input text file: If not present, read from STDIN
Options:
-r, --config-file <CONFIG_FILE>
Path to the setting file in JSON format
-p, --resource_dir <RESOURCE_DIR>
Path to the root directory of resources
-m, --mode <MODE>
Split unit: "A" (short), "B" (middle), or "C" (Named Entity) [default: C]
-o, --output <OUTPUT_FILE>
Output text file: If not present, use stdout
-a, --all
Prints all fields
-w, --wakati
Outputs only surface form
-d, --debug
Debug mode: Print the debug information
-l, --dict <DICTIONARY_PATH>
Path to sudachi dictionary. If None, it refers to the config and then to the baked dictionary
--split-sentences <SPLIT_SENTENCES>
How to split sentences [default: yes]
-h, --help
Print help (see more with '--help')
-V, --version
Print version
Output
Columns are tab separated:
Surface
Part-of-Speech Tags (comma separated)
Normalized Form
When you add the -a ( --all ) flag, it additionally outputs
Dictionary Form
Reading Form
Dictionary ID
0 for the system dictionary
1 and above for the user dictionaries
-1 if a word is Out-of-Vocabulary (not in the dictionary)
Synonym group IDs
(OOV) if a word is Out-of-Vocabulary (not in the dictionary)
$ echo "外国人参政権" | sudachi -a
外国人参政権 名詞,普通名詞,一般,*,*,* 外国人参政権 外国人参政権 ガイコクジンサンセイケン 0 []
EOS
echo "阿quei" | sudachipy -a
阿 名詞,普通名詞,一般,*,*,* 阿 阿 -1 [] (OOV)
quei 名詞,普通名詞,一般,*,*,* quei quei -1 [] (OOV)
EOS
When you add the -w ( --wakati ) flag, it outputs space-delimited surface forms instead.
$ echo "外国人参政権" | sudachi -m A -w
外国 人 参政 権
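The tab-separated output described above is easy to consume from other programs. Here is a minimal Python sketch (not part of sudachi.rs itself; it assumes the default three-column output shown above):

```python
import subprocess

def tokenize(text: str, mode: str = "C"):
    """Run the sudachi CLI and parse its default surface / POS / normalized-form output."""
    proc = subprocess.run(
        ["sudachi", "--mode", mode],
        input=text, capture_output=True, text=True, check=True,
    )
    tokens = []
    for line in proc.stdout.splitlines():
        if line == "EOS":  # sentence boundary marker
            continue
        surface, pos, normalized = line.split("\t")
        tokens.append({"surface": surface, "pos": pos.split(","), "normalized": normalized})
    return tokens

print(tokenize("外国人参政権"))
```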
ToDo
Out of Vocabulary handling
Easy dictionary file install & management, similar to SudachiPy
Registration to crates.io
References
Sudachi
WorksApplications/Sudachi
WorksApplications/SudachiDict
WorksApplications/SudachiPy
msnoigrs/gosudachi
Morphological Analyzers in Rust
agatan/yoin: A Japanese Morphological Analyzer written in pure Rust
wareya/notmecab-rs: notmecab-rs is a very basic mecab clone, designed only to do parsing, not training.
Logo
Sudachi Logo
Crab illustration: Pixabay
|
# sudachi.rs - 日本語README
<p align="center"><img width="100" src="logo.png" alt="sudachi.rs logo"></p>
sudachi.rs は日本語形態素解析器 [Sudachi](https://github.com/WorksApplications/Sudachi) のRust実装です。
[English README](README.md) [SudachiPy Documentation](https://worksapplications.github.io/sudachi.rs/python)
## TL;DR
SudachiPyとして使うには
```bash
$ pip install --upgrade 'sudachipy>=0.6.8'
```
```bash
$ git clone https://github.com/WorksApplications/sudachi.rs.git
$ cd ./sudachi.rs
$ cargo build --release
$ cargo install --path sudachi-cli/
$ ./fetch_dictionary.sh
$ echo "高輪ゲートウェイ駅" | sudachi
高輪ゲートウェイ駅 名詞,固有名詞,一般,*,*,* 高輪ゲートウェイ駅
EOS
```
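As a companion to the TL;DR above, here is a minimal Python sketch of the SudachiPy API (an assumption based on SudachiPy 0.6.x; it also assumes a dictionary package such as `sudachidict_core` is installed):

```python
from sudachipy import Dictionary, SplitMode

# Load the installed dictionary (e.g. sudachidict_core) and create a tokenizer.
tokenizer = Dictionary().create()

for morpheme in tokenizer.tokenize("高輪ゲートウェイ駅", SplitMode.C):
    print(morpheme.surface(),
          ",".join(morpheme.part_of_speech()),
          morpheme.normalized_form())
```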
### 利用例
複数粒度での分割
```
$ echo 選挙管理委員会 | sudachi
選挙管理委員会 名詞,固有名詞,一般,*,*,* 選挙管理委員会
EOS
$ echo 選挙管理委員会 | sudachi --mode A
選挙 名詞,普通名詞,サ変可能,*,*,* 選挙
管理 名詞,普通名詞,サ変可能,*,*,* 管理
委員 名詞,普通名詞,一般,*,*,* 委員
会 名詞,普通名詞,一般,*,*,* 会
EOS
```
正規化表記
```
$ echo 打込む かつ丼 附属 vintage | sudachi
打込む 動詞,一般,*,*,五段-マ行,終止形-一般 打ち込む
空白,*,*,*,*,*
かつ丼 名詞,普通名詞,一般,*,*,* カツ丼
空白,*,*,*,*,*
附属 名詞,普通名詞,サ変可能,*,*,* 付属
空白,*,*,*,*,*
vintage 名詞,普通名詞,一般,*,*,* ビンテージ
EOS
```
分かち書き出力
```
$ cat lemon.txt
えたいの知れない不吉な塊が私の心を始終圧えつけていた。
焦躁と言おうか、嫌悪と言おうか――酒を飲んだあとに宿酔があるように、酒を毎日飲んでいると宿酔に相当した時期がやって来る。
それが来たのだ。これはちょっといけなかった。
$ sudachi --wakati lemon.txt
えたい の 知れ ない 不吉 な 塊 が 私 の 心 を 始終 圧え つけ て い た 。
焦躁 と 言おう か 、 嫌悪 と 言おう か ― ― 酒 を 飲ん だ あと に 宿酔 が ある よう に 、 酒 を 毎日 飲ん で いる と 宿酔 に 相当 し た 時期 が やっ て 来る 。
それ が 来 た の だ 。 これ は ちょっと いけ なかっ た 。
```
## セットアップ
sudachi.rs本体に加え、デフォルトで使用するプラグイン、また辞書が必要になります。※パッケージには辞書が含まれていません。
### 1. ソースコードの取得
```
$ git clone https://github.com/WorksApplications/sudachi.rs.git
```
### 2. Sudachi辞書のダウンロード
[WorksApplications/SudachiDict](https://github.com/WorksApplications/SudachiDict)から辞書のzipファイル( `small` 、 `core` 、 `full` から一つ選択)し、解凍して、必要であれば中にある `system_*.dic` ファイルをわかりやすい位置に置いてください。
デフォルトの設定ファイルでは、辞書ファイルが `resources/system.dic` に存在していると指定しています(ファイル名が `system.dic` に変わっていることに注意)。
#### ダウンロードスクリプト
上記のように手動で設置する以外に、レポジトリにあるスクリプトを使って自動的に辞書をダウンロードし `resources/system.dic` として設置することもできます。
```
$ ./fetch_dictionary.sh
```
### 3. ビルド
#### ビルド(デフォルト)
`--all` フラグを使って付属のプラグインもまとめてビルドすることができます。
```
$ cargo build --release
```
#### ビルド(辞書バイナリの埋め込み)
**以下は現在未対応となっています** https://github.com/WorksApplications/sudachi.rs/issues/35 をご参考ください。
`bake_dictionary` フィーチャーフラグを立ててビルドすることで、辞書ファイルをバイナリに埋め込むことができます。
これによってビルドされた実行ファイルは、**辞書バイナリを内包しています**。
オプションや設定ファイルで辞書が指定されなかった場合、この内包辞書が使用されます。
ビルド時、埋め込む辞書へのパスを `SUDACHI_DICT_PATH` 環境変数によって指定する必要があります。
このパスは絶対パスもしくは sudachi.rs ディレクトリからの相対パスで指定してください。
Unix-likeシステムでの例:
```sh
# resources/system.dic への辞書ダウンロード
$ ./fetch_dictionary.sh
# bake_dictionary フィーチャーフラグ付きでビルド (辞書を相対パスで指定)
$ env SUDACHI_DICT_PATH=resources/system.dic cargo build --release --features bake_dictionary
# もしくは
# bake_dictionary フィーチャーフラグ付きでビルド (辞書を絶対パスで指定)
$ env SUDACHI_DICT_PATH=/path/to/my-sudachi.dic cargo build --release --features bake_dictionary
```
### 4. インストール
```
sudachi.rs/ $ cargo install --path sudachi-cli/
$ which sudachi
/Users/<USER>/.cargo/bin/sudachi
$ sudachi -h
sudachi 0.6.0
A Japanese tokenizer
...
```
## 利用方法
```bash
$ sudachi -h
A Japanese tokenizer
Usage: sudachi [OPTIONS] [FILE] [COMMAND]
Commands:
build
Builds system dictionary
ubuild
Builds user dictionary
dump
help
Print this message or the help of the given subcommand(s)
Arguments:
[FILE]
Input text file: If not present, read from STDIN
Options:
-r, --config-file <CONFIG_FILE>
Path to the setting file in JSON format
-p, --resource_dir <RESOURCE_DIR>
Path to the root directory of resources
-m, --mode <MODE>
Split unit: "A" (short), "B" (middle), or "C" (Named Entity) [default: C]
-o, --output <OUTPUT_FILE>
Output text file: If not present, use stdout
-a, --all
Prints all fields
-w, --wakati
Outputs only surface form
-d, --debug
Debug mode: Print the debug information
-l, --dict <DICTIONARY_PATH>
Path to sudachi dictionary. If None, it refer config and then baked dictionary
--split-sentences <SPLIT_SENTENCES>
How to split sentences [default: yes]
-h, --help
Print help (see more with '--help')
-V, --version
Print version
```
### 出力
タブ区切りで出力されます。 デフォルトは以下の情報が含まれます。
- 表層形
- 品詞(コンマ区切り)
- 正規化表記
オプションで -a (--all) を指定すると以下の情報が追加されます。
- 辞書形
- 読み
- 辞書ID
- 0 システム辞書
- 1 ユーザー辞書
- -1 未知語(辞書に含まれない単語)
- 同義語グループID
- "OOV" 未知語(辞書に含まれない単語)の場合のみ
```bash
$ echo "外国人参政権" | sudachi -a
外国人参政権 名詞,普通名詞,一般,*,*,* 外国人参政権 外国人参政権 ガイコクジンサンセイケン 0 []
EOS
```
```bash
echo "阿quei" | sudachipy -a
阿 名詞,普通名詞,一般,*,*,* 阿 阿 -1 [] (OOV)
quei 名詞,普通名詞,一般,*,*,* quei quei -1 [] (OOV)
EOS
```
オプションで -w (--wakati) を指定すると代わりに表層形のみをスペース区切りで出力します。
```bash
$ echo "外国人参政権" | sudachi -m A -w
外国 人 参政 権
```
## ToDo
- [x] 未知語処理
- [ ] 簡単な辞書ファイルのインストール、管理([SudachiPyでの方式を参考に](https://github.com/WorksApplications/SudachiPy/issues/73))
- [ ] crates.io への登録
## リファレンス
### Sudachi
- [WorksApplications/Sudachi](https://github.com/WorksApplications/Sudachi)
- [WorksApplications/SudachiDict](https://github.com/WorksApplications/SudachiDict)
- [WorksApplications/SudachiPy](https://github.com/WorksApplications/SudachiPy)
- [msnoigrs/gosudachi](https://github.com/msnoigrs/gosudachi)
### Rustによる形態素解析器の実装
- [agatan/yoin: A Japanese Morphological Analyzer written in pure Rust](https://github.com/agatan/yoin)
- [wareya/notmecab-rs: notmecab-rs is a very basic mecab clone, designed only to do parsing, not training.](https://github.com/wareya/notmecab-rs)
### ロゴ
- [Sudachiのロゴ](https://github.com/WorksApplications/Sudachi/blob/develop/docs/Sudachi.png)
- カニのイラスト: [Pixabay](https://pixabay.com/ja/vectors/%E5%8B%95%E7%89%A9-%E3%82%AB%E3%83%8B-%E7%94%B2%E6%AE%BB%E9%A1%9E-%E6%B5%B7-2029728/)
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[] |
true |
https://github.com/polm/unidic-py
|
2020-01-05T07:05:06Z
|
Unidic packaged for installation via pip.
|
polm / unidic-py
Public
Branches
Tags
Go to file
Go to file
Code
doc
extras
unidic
LICEN…
LICEN…
MANIF…
READ…
dicts.json
require…
setup.cfg
setup.py
This is a version of UniDic for Contemporary Written
Japanese packaged for use with pip.
Currently it supports 3.1.0, the latest version of UniDic.
Note this will take up 770MB on disk after install. If
you want a small package, try unidic-lite.
The data for this dictionary is hosted as part of the AWS
Open Data Sponsorship Program. You can read the
announcement here.
About
Unidic packaged for installation via
pip.
# nlp # japanese # unidic
Readme
MIT, BSD-3-Clause licenses found
Activity
77 stars
3 watching
8 forks
Report repository
Releases 6
v1.1.0: UniDic 3.1.0 is n…
Latest
on Oct 10, 2021
+ 5 releases
Packages
No packages published
Contributors
2
polm Paul O'Leary McCann
shogo82148 ICHINOSE Shogo
Languages
Python 87.7%
Shell 12.3%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
unidic-py
README
MIT license
BSD-3-Clause license
After installing via pip, you need to download the
dictionary using the following command:
With fugashi or mecab-python3 unidic will be used
automatically when installed, though if you want you can
manually pass the MeCab arguments:
This has a few changes from the official UniDic release to
make it easier to use.
entries for 令和 have been added
single-character numeric and alphabetic words have
been deleted
unk.def has been modified so unknown
punctuation won't be marked as a noun
See the extras directory for details on how to replicate
the build process.
Here is a list of fields included in this edition of UniDic.
For more information see the UniDic FAQ, though not all
fields are included. For fields in the UniDic FAQ the name
given there is included. Also refer to the description of the
field hierarchy for details.
Fields which are not applicable are usually marked with
an asterisk ( * ).
pos1, pos2, pos3, pos4: Part of speech fields. The
earlier fields are more general, the later fields are
more specific.
cType: 活用型, conjugation type. Will have a value
like 五段-ラ行.
python -m unidic download
import fugashi
import unidic
tagger = fugashi.Tagger('-d "{}"'.format(unidic.DICDIR))
# that's it!
Differences from the Official UniDic
Release
Fields
cForm: 活用形, conjugation shape. Will have a value
like 連用形-促音便.
lForm: 語彙素読み, lemma reading. The reading of
the lemma in katakana, this uses the same format as
the kana field, not pron .
lemma: 語彙素(+語彙素細分類). The lemma is a
non-inflected "dictionary form" of a word. UniDic
lemmas sometimes include extra info or have
unusual forms, like using katakana for some place
names.
orth: 書字形出現形, the word as it appears in text,
this appears to be identical to the surface.
pron: 発音形出現形, pronunciation. This is similar to
kana except that long vowels are indicated with a ー,
so 講師 is こーし.
orthBase: 書字形基本形, the uninflected form of the
word using its current written form. For example, for
彷徨った the lemma is さ迷う but the orthBase is 彷
徨う.
pronBase: 発音形基本形, the pronunciation of the
base form. Like pron for the lemma or orthBase .
goshu: 語種, word type. Etymological category. In
order of frequency, 和, 固, 漢, 外, 混, 記号, 不明.
Defined for all dictionary words, blank for unks.
iType: 語頭変化化型, "i" is for "initial". This is the type
of initial transformation the word undergoes when
combining, for example 兵 is へ半濁 because it can
be read as べい in combination. This is available for
<2% of entries.
iForm: 語頭変化形, this is the initial form of the word
in context, such as 基本形 or 半濁音形.
fType: 語末変化化型, "f" is for "final", but otherwise
as iType. For example 医学 is ク促 because it can
change to いがっ (apparently). This is available for
<0.1% of entries.
fForm: 語末変化形, as iForm but for final
transformations.
iConType: 語頭変化結合型, initial change fusion
type. Describes phonetic change at the start of the
word in counting expressions. Only available for a
few hundred entries, mostly numbers. Values are N
followed by a letter or number; most entries with this
value are numeric.
fConType: 語末変化結合型, final change fusion type.
This is also used for counting expressions, and like
iConType it is only available for a few hundred
entries. Unlike iConType the values are very
complicated, like B1S6SjShS,B1S6S8SjShS .
type: Appears to refer to the type of the lemma. See
the details below for an overview.
Type and POS fields in unidic-cwj-202302
kana: 読みがな, this is the typical representation of a
word in katakana, unlike pron. 講師 is コウシ.
kanaBase: 仮名形基本形, this is the typical katakana
representation of the lemma.
form: 語形出現形, the form of the word as it appears.
Form groups the same word with different written
expressions together.
formBase: 語形基本形 the uninflected form of the
word. For example, the formBase オオキイ groups
its orthBase 書字形基本形 大きい and おおきい
together. Also since its casual form of the orthBase
おっきい has a different pronunciation, it is regarded
as a distinct formBase オッキイ (see the UniDic
hierarchy for details).
aType: Accent type. This is a (potentially) comma-
separated field which has the number of the mora
taking the accent in 標準語 (standard language).
When there are multiple values, more common
accent patterns come first.
aConType: This describes how the accent shifts
when the word is used in a counter expression. It
uses complicated notation.
aModType: Presumably accent related but unclear
use. Available for <25% of entries and only has 6
non-default values.
lid: 語彙表ID. A long lemma ID. This seems to be a
kind of GUID. There is usually one entry per line in
the CSV, except that half-width and full-width
variations can be combined. Example:
7821659499274752
lemma_id: 語彙素ID. A shorter lemma id, starting
from 1. This seems to be as unique as the lemma
field, so many CSV lines can share this value.
Examples of values that share an ID are: クリエイテ
ィブ, クリエイティヴ, クリエーティブ and Crea
tive.
The modern Japanese UniDic is available under the GPL,
LGPL, or BSD license, see here. UniDic is developed by
NINJAL, the National Institute for Japanese Language
and Linguistics. UniDic is copyrighted by the UniDic
Consortium and is distributed here under the terms of the
BSD License.
License
|
# unidic-py
This is a version of [UniDic](https://ccd.ninjal.ac.jp/unidic/) for
Contemporary Written Japanese packaged for use with pip.
Currently it supports 3.1.0, the latest version of UniDic. **Note this will
take up 770MB on disk after install.** If you want a small package, try
[unidic-lite](https://github.com/polm/unidic-lite).
The data for this dictionary is hosted as part of the AWS Open Data
Sponsorship Program. You can read the announcement
[here](https://aws.amazon.com/jp/blogs/news/published-unidic-mecab-on-aws-open-data/).
After installing via pip, you need to download the dictionary using the
following command:
python -m unidic download
With [fugashi](https://github.com/polm/fugashi) or
[mecab-python3](https://github.com/samurait/mecab-python3) unidic will be used
automatically when installed, though if you want you can manually pass the
MeCab arguments:
import fugashi
import unidic
tagger = fugashi.Tagger('-d "{}"'.format(unidic.DICDIR))
# that's it!
## Differences from the Official UniDic Release
This has a few changes from the official UniDic release to make it easier to use.
- entries for 令和 have been added
- single-character numeric and alphabetic words have been deleted
- `unk.def` has been modified so unknown punctuation won't be marked as a noun
See the `extras` directory for details on how to replicate the build process.
## Fields
Here is a list of fields included in this edition of UniDic. For more information see the [UniDic FAQ](https://clrd.ninjal.ac.jp/unidic/faq.html#col_name), though not all fields are included. For fields in the UniDic FAQ the name given there is included. Also refer to the [description of the field hierarchy](https://clrd.ninjal.ac.jp/unidic/glossary.html#kaisouteki) for details.
Fields which are not applicable are usually marked with an asterisk (`*`).
- **pos1, pos2, pos3, pos4**: Part of speech fields. The earlier fields are more general, the later fields are more specific.
- **cType:** 活用型, conjugation type. Will have a value like `五段-ラ行`.
- **cForm:** 活用形, conjugation shape. Will have a value like `連用形-促音便`.
- **lForm:** 語彙素読み, lemma reading. The reading of the lemma in katakana, this uses the same format as the `kana` field, not `pron`.
- **lemma:** 語彙素(+語彙素細分類). The lemma is a non-inflected "dictionary form" of a word. UniDic lemmas sometimes include extra info or have unusual forms, like using katakana for some place names.
- **orth:** 書字形出現形, the word as it appears in text, this appears to be identical to the surface.
- **pron:** 発音形出現形, pronunciation. This is similar to kana except that long vowels are indicated with a ー, so 講師 is こーし.
- **orthBase:** 書字形基本形, the uninflected form of the word using its current written form. For example, for 彷徨った the lemma is さ迷う but the orthBase is 彷徨う.
- **pronBase:** 発音形基本形, the pronunciation of the base form. Like `pron` for the `lemma` or `orthBase`.
- **goshu:** 語種, word type. Etymological category. In order of frequency, 和, 固, 漢, 外, 混, 記号, 不明. Defined for all dictionary words, blank for unks.
- **iType:** 語頭変化化型, "i" is for "initial". This is the type of initial transformation the word undergoes when combining, for example 兵 is へ半濁 because it can be read as べい in combination. This is available for <2% of entries.
- **iForm:** 語頭変化形, this is the initial form of the word in context, such as 基本形 or 半濁音形.
- **fType:** 語末変化化型, "f" is for "final", but otherwise as iType. For example 医学 is ク促 because it can change to いがっ (apparently). This is available for <0.1% of entries.
- **fForm:** 語末変化形, as iForm but for final transformations.
- **iConType:** 語頭変化結合型, initial change fusion type. Describes phonetic change at the start of the word in counting expressions. Only available for a few hundred entries, mostly numbers. Values are N followed by a letter or number; most entries with this value are numeric.
- **fConType:** 語末変化結合型, final change fusion type. This is also used for counting expressions, and like iConType it is only available for a few hundred entries. Unlike iConType the values are very complicated, like `B1S6SjShS,B1S6S8SjShS`.
- **type:** Appears to refer to the type of the lemma. See the details below for an overview.
<details>
<summary>Type and POS fields in unidic-cwj-202302</summary>
<pre>
type,pos1,pos2,pos3,pos4
人名,名詞,固有名詞,人名,一般
他,感動詞,フィラー,*,*
他,感動詞,一般,*,*
他,接続詞,*,*,*
体,代名詞,*,*,*
体,名詞,助動詞語幹,*,*
体,名詞,普通名詞,サ変可能,*
体,名詞,普通名詞,サ変形状詞可能,*
体,名詞,普通名詞,一般,*
体,名詞,普通名詞,副詞可能,*
体,名詞,普通名詞,助数詞可能,*
体,名詞,普通名詞,形状詞可能,*
係助,助詞,係助詞,*,*
副助,助詞,副助詞,*,*
助動,助動詞,*,*,*
助動,形状詞,助動詞語幹,*,*
助数,接尾辞,名詞的,助数詞,*
名,名詞,固有名詞,人名,名
固有名,名詞,固有名詞,一般,*
国,名詞,固有名詞,地名,国
地名,名詞,固有名詞,地名,一般
姓,名詞,固有名詞,人名,姓
接助,助詞,接続助詞,*,*
接尾体,接尾辞,名詞的,サ変可能,*
接尾体,接尾辞,名詞的,一般,*
接尾体,接尾辞,名詞的,副詞可能,*
接尾用,接尾辞,動詞的,*,*
接尾相,接尾辞,形容詞的,*,*
接尾相,接尾辞,形状詞的,*,*
接頭,接頭辞,*,*,*
数,名詞,数詞,*,*
格助,助詞,格助詞,*,*
準助,助詞,準体助詞,*,*
用,動詞,一般,*,*
用,動詞,非自立可能,*,*
相,副詞,*,*,*
相,形容詞,一般,*,*
相,形容詞,非自立可能,*,*
相,形状詞,タリ,*,*
相,形状詞,一般,*,*
相,連体詞,*,*,*
終助,助詞,終助詞,*,*
補助,空白,*,*,*
補助,補助記号,一般,*,*
補助,補助記号,句点,*,*
補助,補助記号,括弧閉,*,*
補助,補助記号,括弧開,*,*
補助,補助記号,読点,*,*
補助,補助記号,AA,一般,*
補助,補助記号,AA,顔文字,*
記号,記号,一般,*,*
記号,記号,文字,*,*
</pre>
</details>
- **kana:** 読みがな, this is the typical representation of a word in katakana, unlike pron. 講師 is コウシ.
- **kanaBase:** 仮名形基本形, this is the typical katakana representation of the lemma.
- **form:** 語形出現形, the form of the word as it appears. Form groups the same word with different written expressions together.
- **formBase:** 語形基本形, the uninflected form of the word. For example, the formBase オオキイ groups its orthBase 書字形基本形 大きい and おおきい together. Also, since the casual form おっきい has a different pronunciation, it is regarded as a distinct formBase オッキイ (see the UniDic hierarchy for details).
- **aType:** Accent type. This is a (potentially) comma-separated field which has the number of the mora taking the accent in 標準語 (standard language). When there are multiple values, more common accent patterns come first.
- **aConType:** This describes how the accent shifts when the word is used in a counter expression. It uses complicated notation.
- **aModType:** Presumably accent related but unclear use. Available for <25% of entries and only has 6 non-default values.
- **lid:** 語彙表ID. A long lemma ID. This seems to be a kind of GUID. There is usually one entry per line in the CSV, except that half-width and full-width variations can be combined. Example: 7821659499274752
- **lemma_id:** 語彙素ID. A shorter lemma id, starting from 1. This seems to be as unique as the `lemma` field, so many CSV lines can share this value. Examples of values that share an ID are: クリエイティブ, クリエイティヴ, クリエーティブ and Creative.
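To make the field list above concrete, here is a minimal sketch that prints a few of these fields through fugashi (assuming the dictionary has been fetched with `python -m unidic download`; recent fugashi releases expose the UniDic fields as attributes of `word.feature`):

```python
import fugashi
import unidic

# Point MeCab at the full UniDic downloaded by `python -m unidic download`.
tagger = fugashi.Tagger('-d "{}"'.format(unidic.DICDIR))

for word in tagger("講師が彷徨った"):
    f = word.feature  # named tuple holding the UniDic fields described above
    print(word.surface, f.pos1, f.lemma, f.orthBase, f.pron)
```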
# License
The modern Japanese UniDic is available under the GPL, LGPL, or BSD license,
[see here](https://ccd.ninjal.ac.jp/unidic/download#unidic_bccwj). UniDic is
developed by [NINJAL](https://www.ninjal.ac.jp/), the National Institute for
Japanese Language and Linguistics. UniDic is copyrighted by the UniDic
Consortium and is distributed here under the terms of the [BSD
License](./LICENSE.unidic).
The code in this repository is not written or maintained by NINJAL. The code is
available under the MIT or WTFPL License, as you prefer.
|
[] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/chakki-works/Japanese-Company-Lexicon
|
2020-01-16T05:43:49Z
|
Japanese Company Lexicon (JCLdic)
|
chakki-works / Japanese-Company-Lexicon
Public
Branches
Tags
Go to file
Go to file
Code
models
scripts
tools
.gitignore
LICENSE
README.md
main.py
requirements.txt
This repository contains the implementation for the paper: High Coverage Lexicon for Japanese Company Name
Recognition(ANLP 2020)
We provide two kinds of format. The CSV format contains one name per line, and the MeCab format contains one record per line.
Users can directly open MeCab CSV format to check the record. The MeCab Dic format is compiled by MeCab, which can be
used as the user dictionary of MeCab. MeCab Dic usage
JCL_slim (7067216, CSV, MeCab CSV, MeCab Dic): No furigana, no extra enNames, no digital names, the name length is
longer than 2 and shorter than 30.
JCL_medium (7555163, CSV, MeCab CSV, MeCab Dic): No digital names, the name length is longer than 2 and shorter than
30.
JCL_full (8491326, CSV, MeCab CSV, MeCab Dic 1 (5,000,000), MeCab Dic 2 (3,491,326)): Contain all kinds of names. I
split the MeCab Dic into two files because MeCab cannot compile the single file due to the large file size.
Our goal is to build the enterprise knowledge graph, so we only consider companies that conduct economic activity for
commercial purposes. These companies are denoted as Stock Company (株式会社), Limited Company (有限会社), and Limited
Liability Company (合同会社).
The full version contains all kinds of names, including digits, one-character aliases, etc. These abnormal names will cause
annotation errors for the NER task. We recommend using the JCL_medium or JCL_slim version.
These release versions are easier to use than the version we used in the paper. Considering the trade-off between dictionary size
and searching performance, we delete zenkaku(全角) names and only preserve the hankaku(半角) names. For example, we
delete '株式会社KADOKAWA' but preserve '株式会社KADOKAWA' . As for the normalization process, please read the Python
section in usage page.
About
No description, website, or topics
provided.
Readme
MIT license
Activity
Custom properties
94 stars
5 watching
9 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
4
Languages
Python 98.8%
Shell 1.2%
Code
Issues
Pull requests
1
Actions
Projects
Wiki
Security
Insights
Japanese Company Lexicon (JCLdic)
Download links
README
MIT license
Single Lexicon
Total Names
Unique Company Names
JCL-slim
7067216
7067216
JCL-medium
7555163
7555163
JCL-full
8491326
8491326
IPAdic
392126
16596
Juman
751185
9598
NEologd
3171530
244213
Multiple Lexicon
IPAdic-NEologd
4615340
257246
IPAdic-NEologd-JCL(medium)
12093988
7722861
See wiki page for detail usage.
Instead of downloading the data, you can even build the JCLdic from scratch by following the below instructions.
If you want to download the data by Selenium, you have to download the ChromeDriver. First check your Chrome version, and
then download the corresponding version of ChromeDriver from here.
Uncompress the ZIP file to get chromedriver , then move it to the target directory:
We create JCLdic according to the original data from National Tax Agency Corporate Number Publication Site (国税庁法人番号公
表サイト). Please download the ZIP files data from the below site:
CSV形式・Unicode
Put the ZIP files into the data/hojin/zip directory, and run the script below to preprocess the data:
Below directories will be generated automatically, but you need to create data/hojin/zip directory manually to store the ZIP
files in the first place.
Usage
JCLdic Generation Process
Data Preparation
# conda create -n jcl python=3.6
# source activate jcl
pip install -r requirements.txt
cd $HOME/Downloads
unzip chromedriver_mac64.zip
mv chromedriver /usr/local/bin
bash scripts/download.sh
.
├── data
│ ├── corpora
│ │ ├── bccwj # raw dataset
│ │ ├── mainichi # raw dataset
│ │ └── output # processed bccwj and mainichi dataset as IBO2 format
│ ├── dictionaries
│ │ ├── ipadic # raw lexicon
Generating alias
Until now, the JCLdic is prepared.
If you want to get the MeCab format:
The results below are based on the latest version of JCLdic, which may differ from the performance reported in the paper.
Because these datasets (Mainichi, BCCWJ) are not free, you should get the datasets by yourself. After you get the datasets, put
them to data/corpora/{bccwj,mainichi} and run the below command:
If you want to compare other dictionaries, you can download them from the links below and put them in
data/dictionaries/{ipadic,jumman,neologd} :
Calculate coverage:
The intrinsic evaluation calculates how many company names are covered by each lexicon. The best results are highlighted.
│ │ ├── neologd # raw lexicon
│ │ ├── juman # raw lexicon
│ │ └── output # processed lexicons
│ └── hojin
│ ├── csv # downloaded hojin data
│ ├── output # processed JCLdic
│ └── zip # downloaded hojin data
JCLdic Generation
bash scripts/generate_alias.sh
python tools/save_mecab_format.py
Evaluation
Datasets, dictionaries, and annotated datasets preparation
# 1 Datasets preparation
python tools/dataset_converter.py # Read data from .xml, .sgml to .tsv
python tools/dataset_preprocess.py # Generate .bio data
# ipadic
# https://github.com/taku910/mecab/tree/master/mecab-ipadic
# juman
# https://github.com/taku910/mecab/tree/master/mecab-jumandic
# neologd
# https://github.com/neologd/mecab-ipadic-neologd/blob/master/seed/mecab-user-dict-seed.20200109.csv.xz
# 2 Prepare dictionaries
python tools/dictionary_preprocess.py
# 3 Annotate datasets with different dictionaries
python tools/annotation_with_dict.py
Intrinsic Evaluation: Coverage
python tools/coverage.py
Single Lexicon
Mainichi
BCCWJ
Count
Coverage
Count
Coverage
JCL-slim
727
0.4601
419
0.4671
JCL-medium
730
0.4620
422
0.4705
JCL-full
805
0.5095
487
0.5429
IPAdic
726
0.4595
316
0.3523
Juman
197
0.1247
133
0.1483
NEologd
424
0.2684
241
0.2687
Multiple Lexicon
IPAdic-NEologd
839
0.5310
421
0.4693
IPAdic-neologd-JCL(medium)
1064
0.6734
568
0.6332
Make sure the main.py has following setting:
Run the below command:
The extrinsic evaluation uses the NER task to measure the performance of the different lexicons. We annotate the training set with
different lexicons, train the models (CRF and Bi-LSTM-CRF), and test on the test set. Gold means we train the model with
the true labels.
The following table shows the extrinsic evaluation results. The best results are highlighted.
Single Lexicon
Mainichi F1
BCCWJ F1
CRF
Bi-LSTM-CRF
CRF
Bi-LSTM-CRF
Gold
0.9756
0.9683
0.9273
0.8911
JCL-slim
0.8533
0.8708
0.8506
0.8484
JCL-medium
0.8517
0.8709
0.8501
0.8526
JCL-full
0.5264
0.5792
0.5646
0.7028
Juman
0.8865
0.8905
0.8320
0.8169
IPAdic
0.9048
0.9141
0.8646
0.8334
NEologd
0.8975
0.9066
0.8453
0.8288
Multiple Lexicon
IPAdic-NEologd
0.8911
0.9074
0.8624
0.8360
IPAdic-NEologd-JCL(medium)
0.8335
0.8752
0.8530
0.8524
Extrinsic Evaluation: NER task
# main.py setting
entity_level = False
# ...
### result 1 ###
# bccwj
main(bccwj_paths, bccwj_glod, entity_level=entity_level)
# mainichi
main(mainichi_paths, mainichi_glod, entity_level=entity_level)
python main.py
The new experiment results are in parentheses. We use the dictionary annotation as a CRF feature, and the best results are
highlighted. The results show that the dictionary features boost the performance, especially for JCL.
Single Lexicon
Mainichi F1
BCCWJ F1
CRF
CRF
Gold
0.9756 (1)
0.9273 (1)
JCL-slim
0.8533 (0.9754)
0.8506 (0.9339)
JCL-medium
0.8517 (0.9752)
0.8501 (0.9303)
JCL-full
0.5264 (0.9764)
0.5646 (0.9364)
Juman
0.8865 (0.9754)
0.8320 (0.9276)
IPAdic
0.9048 (0.9758)
0.8646 (0.9299)
NEologd
0.8975 (0.9750)
0.8453 (0.9282)
Multiple Lexicon
IPAdic-NEologd
0.8911 (0.9767)
0.8624 (0.9366)
IPAdic-NEologd-JCL(medium)
0.8335 (0.9759)
0.8530 (0.9334)
Make sure the main.py has following setting:
Run the below command:
The entity level result:
result1 : train on the labels tagged by the dictionary
result2 : add the dictionary tag as a CRF feature and use the true labels for training
Single Lexicon
Mainichi F1 (CRF)
Mainichi F1 (CRF)
BCCWJ F1 (CRF)
BCCWJ F1 (CRF)
Result1
Result2
Result1
Result2
Gold
0.7826
0.5537
JCL-slim
0.1326
0.7969
0.1632
0.5892
Extra Experiment
Dictionary annotation as feature on token level
Dictionary annotation as feature on entity level
# main.py setting
entity_level = True
# ...
### result 1 ###
# bccwj
main(bccwj_paths, bccwj_glod, entity_level=entity_level)
# mainichi
main(mainichi_paths, mainichi_glod, entity_level=entity_level)
### result 2 ###
# bccwj: use dictionary as feature for CRF
crf_tagged_pipeline(bccwj_paths, bccwj_glod, entity_level=entity_level)
# mainichi: use dictionary as feature for CRF
crf_tagged_pipeline(mainichi_paths, mainichi_glod, entity_level=entity_level)
python main.py
Single Lexicon
Mainichi F1 (CRF)
Mainichi F1 (CRF)
BCCWJ F1 (CRF)
BCCWJ F1 (CRF)
JCL-medium
0.1363
0.7927
0.1672
0.5813
JCL-full
0.0268
0.8039
0.0446
0.6205
Juman
0.0742
0.7923
0.0329
0.5661
IPAdic
0.3099
0.7924
0.1605
0.5961
NEologd
0.1107
0.7897
0.0814
0.5718
Multiple Lexicon
IPAdic-NEologd
0.2456
0.7986
0.1412
0.6187
IPAdic-NEologd-JCL(medium)
0.1967
0.8009
0.2166
0.6132
From result1 and result2 , we can see that these dictionaries are not suitable for annotating training labels, but the dictionary
features do improve the performance in result2 .
We first divide the result into 3 categories:
Category
Description
Evaluation
Zero
the entity does not exist in the training set
Zero-shot, performance on unseen entity
One
the entity only exists once in the training set
One-shot, performance on low frequency entity
More
the entity exists many times in the training set
Training on normal data
The dataset statistics:
Dataset
BCCWJ
Mainichi
Company Samples/Sentence
1364
3027
Company Entities
1704
4664
Unique Company Entities
897
1580
Number of Unique Company
Entities Exist in Training Set
Zero: 226
One: 472
More: 199
Zero: 1440
One: 49
More: 91
The experiment results:
Single Lexicon
BCCWJ
F1(CRF)
Mainichi
F1(CRF)
Zero
One
More
Zero
One
More
Gold
0.4080
0.8211
0.9091
0.4970
0.8284
0.9353
JCL-slim
0.4748
0.8333
0.9091
0.5345
0.8075
0.9509
JCL-medium
0.4530
0.8660
0.9091
0.5151
0.8061
0.9503
JCL-full
0.5411
0.8333
0.8933
0.5630
0.8467
0.9476
Juman
0.4506
0.7957
0.9032
0.5113
0.8655
0.9431
IPAdic
0.4926
0.8421
0.9161
0.5369
0.8633
0.9419
NEologd
0.4382
0.8454
0.9161
0.5343
0.8456
0.9359
Multiple Lexicon
IPAdic-NEologd
0.5276
0.8600
0.9091
0.5556
0.8623
0.9432
Dictionary feature for low frequency company names on entity level
Single Lexicon
BCCWJ
F1(CRF)
Mainichi
F1(CRF)
IPAdic-NEologd-JCL(medium)
0.5198
0.8421
0.8947
0.5484
0.8487
0.9476
From the results above, we can see that JCLdic boosts the zero-shot and one-shot performance a lot, especially on the BCCWJ
dataset.
Please use the following bibtex, when you refer JCLdic from your papers.
Citation
@INPROCEEDINGS{liang2020jcldic,
author
= {Xu Liang, Taniguchi Yasufumi and Nakayama Hiroki},
|
# Japanese Company Lexicon (JCLdic)
This repository contains the implementation for the paper: [High Coverage Lexicon for Japanese Company Name Recognition(ANLP 2020)](https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/B2-3.pdf)
## Download links
We provide two kinds of format. The `CSV` format contains one name per line, and the [MeCab format](https://gist.github.com/Kimtaro/ab137870ad4a385b2d79) contains one record per line. Users can directly open `MeCab CSV` format to check the record. The `MeCab Dic` format is compiled by MeCab, which can be used as the user dictionary of MeCab. [MeCab Dic usage](https://github.com/chakki-works/Japanese-Company-Lexicon/wiki/JCLdic-Usage)
- JCL_slim (7067216, [CSV](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_slim.csv.zip), [MeCab CSV](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_slim_mecab.csv.zip), [MeCab Dic](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_slim_mecab.dic.zip)): No furigana, no extra enNames, no digital names, the name length is longer than 2 and shorter than 30.
- JCL_medium (7555163, [CSV](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_medium.csv.zip), [MeCab CSV](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_medium_mecab.csv.zip), [MeCab Dic](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_medium_mecab.dic.zip)): No digital names, the name length is longer than 2 and shorter than 30.
- JCL_full (8491326, [CSV](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_full.csv.zip), [MeCab CSV](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_full_mecab.csv.zip), [MeCab Dic 1 (5,000,000)](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_full_mecab_1.dic.zip), [MeCab Dic 2 (3,491,326)](https://s3-ap-northeast-1.amazonaws.com/chakki.jcl.jp/public/jcl_full_mecab_2.dic.zip)): Contain all kinds of names. I split the MeCab Dic into two files because MeCab cannot compile the single file due to the large file size.
Our goal is to build the enterprise knowledge graph, so we only consider companies that conduct economic activity for commercial purposes. These companies are denoted as Stock Company (株式会社), Limited Company (有限会社), and Limited Liability Company (合同会社).
The full version contains all kinds of names, including digits, one-character aliases, etc. These abnormal names will cause annotation errors for the NER task. We recommend using the JCL_medium or JCL_slim version.
These release versions are easier to use than the version we used in the paper. Considering the trade-off between dictionary size and searching performance, we delete zenkaku(全角) names and only preserve the hankaku(半角) names. For example, we delete `'株式会社KADOKAWA'` but preserve `'株式会社KADOKAWA'`. As for the normalization process, please read the Python section in [usage page](https://github.com/chakki-works/Japanese-Company-Lexicon/wiki/JCLdic-Usage).
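As a quick illustration of the MeCab Dic format mentioned above, the compiled dictionary can be passed to MeCab as a user dictionary. A minimal sketch with the mecab-python3 bindings (the dictionary path is a placeholder for wherever you unzip the downloaded file):

```python
import MeCab

# -u registers JCLdic as a MeCab user dictionary (path is an example).
tagger = MeCab.Tagger("-u /path/to/jcl_medium_mecab.dic")
print(tagger.parse("株式会社KADOKAWAの新刊が発売された。"))
```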
| **Single Lexicon** | Total Names | Unique Company Names |
| -------------------------- | ----------- | -------------------- |
| JCL-slim | 7067216 | 7067216 |
| JCL-medium | 7555163 | 7555163 |
| JCL-full | 8491326 | 8491326 |
| IPAdic | 392126 | 16596 |
| Juman | 751185 | 9598 |
| NEologd | 3171530 | 244213 |
| **Multiple Lexicon** | | |
| IPAdic-NEologd | 4615340 | 257246 |
| IPAdic-NEologd-JCL(medium) | 12093988 | 7722861 |
## Usage
See [wiki](https://github.com/chakki-works/Japanese-Company-Lexicon/wiki/JCLdic-Usage) page for detail usage.
## JCLdic Generation Process
Instead of downloading the data, you can even build the JCLdic from scratch by following the below instructions.
### Data Preparation
```
# conda create -n jcl python=3.6
# source activate jcl
pip install -r requirements.txt
```
If you want to download the data by Selenium, you have to download the ChromeDriver. First check your Chrome version, and then download the corresponding version of ChromeDriver from [here](https://chromedriver.chromium.org/downloads).
Uncompress the ZIP file to get `chromedriver`, then move it to the target directory:
```
cd $HOME/Downloads
unzip chromedriver_mac64.zip
mv chromedriver /usr/local/bin
```
We create JCLdic according to the original data from [National Tax Agency Corporate Number Publication Site](https://www.houjin-bangou.nta.go.jp/) (国税庁法人番号公表サイト). Please download the ZIP files data from the below site:
- [CSV形式・Unicode](https://www.houjin-bangou.nta.go.jp/download/zenken/#csv-unicode)
Put the ZIP files into the `data/hojin/zip` directory, and run the script below to preprocess the data:
```bash
bash scripts/download.sh
```
Below directories will be generated automatically, but you need to create `data/hojin/zip` directory manually to store the ZIP files in the first place.
```bash
.
├── data
│ ├── corpora
│ │ ├── bccwj # raw dataset
│ │ ├── mainichi # raw dataset
│ │ └── output # processed bccwj and mainichi dataset as IBO2 format
│ ├── dictionaries
│ │ ├── ipadic # raw lexicon
│ │ ├── neologd # raw lexicon
│ │ ├── juman # raw lexicon
│ │ └── output # processed lexicons
│ └── hojin
│ ├── csv # downloaded hojin data
│ ├── output # processed JCLdic
│ └── zip # downloaded hojin data
```
### JCLdic Generation
Generating alias
```bash
bash scripts/generate_alias.sh
```
Until now, the JCLdic is prepared.
If you want to get the MeCab format:
```
python tools/save_mecab_format.py
```
### Evaluation
The results below are based on the latest version of JCLdic, which may differ from the performance reported in the paper.
#### Datasets, dictionaries, and annotated datasets preparation
Because these datasets (Mainichi, BCCWJ) are not free, you should get the datasets by yourself. After you get the datasets, put them to `data/corpora/{bccwj,mainichi}` and run the below command:
```bash
# 1 Datasets preparation
python tools/dataset_converter.py # Read data from .xml, .sgml to .tsv
python tools/dataset_preprocess.py # Generate .bio data
```
If you want to compare other dictionaries, you can download them from the links below and put them in `data/dictionaries/{ipadic,jumman,neologd}`:
```bash
# ipadic
# https://github.com/taku910/mecab/tree/master/mecab-ipadic
# juman
# https://github.com/taku910/mecab/tree/master/mecab-jumandic
# neologd
# https://github.com/neologd/mecab-ipadic-neologd/blob/master/seed/mecab-user-dict-seed.20200109.csv.xz
# 2 Prepare dictionaries
python tools/dictionary_preprocess.py
```
```bash
# 3 Annotate datasets with different dictionaries
python tools/annotation_with_dict.py
```
#### Intrinsic Evaluation: Coverage
Calculate coverage:
```
python tools/coverage.py
```
The intrinsic evaluation calculates how many company names are covered by each lexicon. The best results are highlighted.
| **Single Lexicon** | Mainichi | | BCCWJ | |
| -------------------------- | -------- | ---------- | ----- | ---------- |
| | Count | Coverage | Count | Coverage |
| JCL-slim | 727 | 0.4601 | 419 | 0.4671 |
| JCL-medium | 730 | 0.4620 | 422 | 0.4705 |
| JCL-full | 805 | **0.5095** | 487 | **0.5429** |
| IPAdic | 726 | 0.4595 | 316 | 0.3523 |
| Juman | 197 | 0.1247 | 133 | 0.1483 |
| NEologd | 424 | 0.2684 | 241 | 0.2687 |
| **Multiple Lexicon** | | | | |
| IPAdic-NEologd | 839 | 0.5310 | 421 | 0.4693 |
| IPAdic-neologd-JCL(medium) | 1064 | **0.6734** | 568 | **0.6332** |
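The coverage numbers above can be read as the fraction of unique gold company names that appear in a lexicon. A minimal sketch of that computation (illustrative only; the repository's `tools/coverage.py` may apply different matching rules):

```python
def coverage(gold_entities, lexicon):
    """Fraction of unique gold company names that appear in the lexicon."""
    gold = set(gold_entities)
    matched = sum(1 for name in gold if name in lexicon)
    return matched / len(gold)

lexicon = {"株式会社KADOKAWA", "トヨタ自動車株式会社"}
print(coverage(["株式会社KADOKAWA", "架空商事株式会社"], lexicon))  # 0.5
```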
#### Extrinsic Evaluation: NER task
Make sure the `main.py` has following setting:
```python
# main.py setting
entity_level = False
# ...
### result 1 ###
# bccwj
main(bccwj_paths, bccwj_glod, entity_level=entity_level)
# mainichi
main(mainichi_paths, mainichi_glod, entity_level=entity_level)
```
Run the below command:
```
python main.py
```
The extrinsic evaluation uses the NER task to measure the performance of the different lexicons. We annotate the training set with different lexicons, train the models (CRF and Bi-LSTM-CRF), and test on the test set. `Gold` means we train the model with the true labels.
The following table shows the extrinsic evaluation results. The best results are highlighted.
| Single Lexicon | Mainichi F1 | | BCCWJ F1 | |
| -------------------------- | ----------- | ----------- | ---------- | ----------- |
| | CRF | Bi-LSTM-CRF | CRF | Bi-LSTM-CRF |
| Gold | 0.9756 | 0.9683 | 0.9273 | 0.8911 |
| JCL-slim | 0.8533 | 0.8708 | 0.8506 | 0.8484 |
| JCL-medium | 0.8517 | 0.8709 | 0.8501 | **0.8526** |
| JCL-full | 0.5264 | 0.5792 | 0.5646 | 0.7028 |
| Juman | 0.8865 | 0.8905 | 0.8320 | 0.8169 |
| IPAdic | **0.9048** | **0.9141** | **0.8646** | 0.8334 |
| NEologd | 0.8975 | 0.9066 | 0.8453 | 0.8288 |
| **Multiple Lexicon** | | | | |
| IPAdic-NEologd | **0.8911** | **0.9074** | **0.8624** | 0.8360 |
| IPAdic-NEologd-JCL(medium) | 0.8335 | 0.8752 | 0.8530 | **0.8524** |
## Extra Experiment
### Dictionary annotation as feature on token level
The new experiment results are in parentheses. We use the dictionary annotation as a CRF feature, and the best results are highlighted. The results show that the dictionary features boost the performance, especially for JCL.
| Single Lexicon | Mainichi F1 | BCCWJ F1 |
| -------------------------- | ------------------- | ------------------- |
| | CRF | CRF |
| Gold | 0.9756 (1) | 0.9273 (1) |
| JCL-slim | 0.8533 (0.9754) | 0.8506 (0.9339) |
| JCL-medium | 0.8517 (0.9752) | 0.8501 (0.9303) |
| JCL-full | 0.5264 (**0.9764**) | 0.5646 (**0.9364**) |
| Juman | 0.8865 (0.9754) | 0.8320 (0.9276) |
| IPAdic | 0.9048 (0.9758) | 0.8646 (0.9299) |
| NEologd | 0.8975 (0.9750) | 0.8453 (0.9282) |
| **Multiple Lexicon** | | |
| IPAdic-NEologd | 0.8911 (**0.9767**) | 0.8624 (**0.9366**) |
| IPAdic-NEologd-JCL(medium) | 0.8335 (0.9759) | 0.8530 (0.9334) |
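The token-level setup above adds a flag for whether each token falls inside a dictionary match. A minimal sklearn-crfsuite-style sketch (feature names and the BIO tags are illustrative, not the repository's code):

```python
def token_features(tokens, dict_tags, i):
    """Features for token i; dict_tags holds the lexicon annotation (e.g. B/I/O)."""
    return {
        "word": tokens[i],
        "dict_tag": dict_tags[i],  # the dictionary-match feature
        "prev_dict_tag": dict_tags[i - 1] if i > 0 else "BOS",
    }

tokens = ["トヨタ", "自動車", "が", "発表", "した"]
dict_tags = ["B-COMPANY", "I-COMPANY", "O", "O", "O"]
X = [token_features(tokens, dict_tags, i) for i in range(len(tokens))]
# X, together with the gold labels, can then be fed to sklearn_crfsuite.CRF.
```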
### Dictionary annotation as feature on entity level
Make sure the `main.py` has following setting:
```python
# main.py setting
entity_level = True
# ...
### result 1 ###
# bccwj
main(bccwj_paths, bccwj_glod, entity_level=entity_level)
# mainichi
main(mainichi_paths, mainichi_glod, entity_level=entity_level)
### result 2 ###
# bccwj: use dictionary as feature for CRF
crf_tagged_pipeline(bccwj_paths, bccwj_glod, entity_level=entity_level)
# mainichi: use dictionary as feature for CRF
crf_tagged_pipeline(mainichi_paths, mainichi_glod, entity_level=entity_level)
```
Run the below command:
```
python main.py
```
The entity level result:
- `result1` : train on the labels tagged by the dictionary
- `result2` : add the dictionary tag as a CRF feature and use the true labels for training
| Single Lexicon | Mainichi F1 (CRF) | Mainichi F1 (CRF) | BCCWJ F1 (CRF) | BCCWJ F1 (CRF) |
| -------------------------- | ----------------- | ----------------- | -------------- | -------------- |
| | Result1 | Result2 | Result1 | Result2 |
| Gold | 0.7826 | | 0.5537 | |
| JCL-slim | 0.1326 | 0.7969 | 0.1632 | 0.5892 |
| JCL-medium | 0.1363 | 0.7927 | **0.1672** | 0.5813 |
| JCL-full | 0.0268 | **0.8039** | 0.0446 | **0.6205** |
| Juman | 0.0742 | 0.7923 | 0.0329 | 0.5661 |
| IPAdic | **0.3099** | 0.7924 | 0.1605 | 0.5961 |
| NEologd | 0.1107 | 0.7897 | 0.0814 | 0.5718 |
| **Multiple Lexicon** | | | | |
| IPAdic-NEologd | **0.2456** | 0.7986 | 0.1412 | 0.6187 |
| IPAdic-NEologd-JCL(medium) | 0.1967 | **0.8009** | **0.2166** | 0.6132 |
From `result1` and `result2`, we can see that these dictionaries are not suitable for annotating training labels, but the dictionary features do improve the performance in `result2`.
### Dictionary feature for low frequency company names on entity level
<!-- Make sure the `main.py` has following setting:
```python
# main.py setting
entity_level = True
# ...
### result 3 ###
# bccwj: evaluate on low frequency company names
main(bccwj_paths, bccwj_glod, entity_level=entity_level, low_frequency=bccwj_counter)
# mainichi: evaluate on low frequency company names
main(mainichi_paths, mainichi_glod, entity_level=entity_level, low_frequency=mainichi_counter)
### result 4 ###
# bccwj: evaluate on low frequency company names, use dictionary as feature for CRF
crf_tagged_pipeline(bccwj_paths, bccwj_glod, entity_level=entity_level, low_frequency=bccwj_counter)
# mainichi: evaluate on low frequency company names, use dictionary as feature for CRF
crf_tagged_pipeline(mainichi_paths, mainichi_glod, entity_level=entity_level, low_frequency=mainichi_counter)
```
Run the below command:
```
python main.py
``` -->
We first divide the result into 3 categories:
| Category | Description | Evaluation |
| -------- | ------------------------------------------------ | --------------------------------------------- |
| Zero | the entity does not exist in the training set | Zero-shot, performance on unseen entity |
| One | the entity only exists once in the training set | One-shot, performance on low frequency entity |
| More | the entity exists many times in the training set | Training on normal data |
The dataset statistics:
| Dataset | BCCWJ | Mainichi |
| ------------------------------------------------------------ | ------------------------------------ | ----------------------------------- |
| Company Samples/Sentence | 1364 | 3027 |
| Company Entities | 1704 | 4664 |
| Unique Company Entities | 897 | 1580 |
| Number of Unique Company <br />Entities Exist in Training Set | Zero: 226<br/>One: 472<br/>More: 199 | Zero: 1440<br/>One: 49<br/>More: 91 |
The experiment results:
| Single Lexicon | BCCWJ<br />F1(CRF) | | | Mainichi<br />F1(CRF) | | |
| -------------------------- | ------------------ | ---------- | ---------- | --------------------- | ---------- | ---------- |
| | Zero | One | More | Zero | One | More |
| Gold | 0.4080 | 0.8211 | 0.9091 | 0.4970 | 0.8284 | 0.9353 |
| JCL-slim | 0.4748 | 0.8333 | 0.9091 | 0.5345 | 0.8075 | **0.9509** |
| JCL-medium | 0.4530 | **0.8660** | 0.9091 | 0.5151 | 0.8061 | 0.9503 |
| JCL-full | **0.5411** | 0.8333 | 0.8933 | **0.5630** | 0.8467 | 0.9476 |
| Juman | 0.4506 | 0.7957 | 0.9032 | 0.5113 | **0.8655** | 0.9431 |
| IPAdic | 0.4926 | 0.8421 | **0.9161** | 0.5369 | 0.8633 | 0.9419 |
| NEologd | 0.4382 | 0.8454 | 0.9161 | 0.5343 | 0.8456 | 0.9359 |
| **Multiple Lexicon** | | | | | | |
| IPAdic-NEologd | 0.5276 | **0.8600** | 0.9091 | **0.5556** | **0.8623** | 0.9432 |
| IPAdic-NEologd-JCL(medium) | **0.5198** | 0.8421 | 0.8947 | 0.5484 | 0.8487 | **0.9476** |
From the results above, we can see that JCLdic boosts the zero-shot and one-shot performance a lot, especially on the BCCWJ dataset.
<!-- ### (Extra) Dictionary feature for low frequency company names on entity level
We could further divide these 3 categories to 6 categories:
| Category | Description | Evaluation |
| -------- | ------------------------------------------------------------ | --------------------------------------------- |
| 0-1 | Not shown in training set, but only shown once in test set | Zero-shot, performance on unseen entity |
| 0-2 | Not shown in training set, but shown more than 2 times in test set | Zero-shot, performance on unseen entity |
| 1-1 | Shown once in training set, and also shown once in test set | One-shot, performance on low frequency entity |
| 1-2 | Shown once in training set, and shown more than 2 times in test set | One-shot, performance on low frequency entity |
| 2-1 | Shown more than 2 times in training set, but only shown once in test set | Training on normal data |
| 2-2 | Shown more than 2 times in training set, and also shown more than 2 times in test set | Training on normal data |
| Single Lexicon | BCCWJ<br />F1(CRF) | | | | | |
| -------------------------- | ------------------ | ---------- | ---------- | ---------- | ---------- | ------ |
| | 0-1 | 0-2 | 1-1 | 1-2 | 2-1 | 2-2 |
| Gold | 0.6512 | 0.6880 | 0.9091 | 0.8197 | 0.8387 | 0.9173 |
| JCL-slim | 0.6931 | 0.6753 | 0.9032 | 0.8182 | 0.8387 | 0.8714 |
| JCL-meidum | 0.6872 | 0.6438 | **0.9091** | **0.8485** | **0.8387** | 0.8905 |
| JCL-full | **0.7097** | 0.6842 | 0.8571 | 0.7879 | 0.8276 | 0.8406 |
| Juman | 0.6413 | 0.7059 | 0.9032 | 0.8000 | 0.8387 | 0.8611 |
| IPAdic | 0.6802 | **0.7081** | 0.9032 | 0.8060 | 0.8387 | 0.8671 |
| NEologd | 0.6630 | 0.6621 | **0.9091** | 0.8060 | 0.8387 | 0.8732 |
| **Multiple Lexicon** | | | | | | |
| IPAdic-NEologd | 0.6957 | **0.6957** | **0.9143** | **0.8308** | 0.8387 | 0.8299 |
| IPAdic-NEologd-JCL(medium) | **0.7440** | 0.6914 | 0.8824 | 0.8254 | 0.8387 | 0.8741 |
For BCCWJ dataset, after adding dictionary features, JCL-full boosts the f1 from 0.6512 to 0.7097 for 0-1.
| Single Lexicon | Mainichi<br />F1(CRF) | | | | | |
| -------------------------- | --------------------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| | 0-1 | 0-2 | 1-1 | 1-2 | 2-1 | 2-2 |
| Gold | 0.4837 | 0.3848 | 0.4921 | 0.5862 | 0.5354 | 0.5881 |
| JCL-slim | 0.4831 | **0.4244** | 0.4974 | 0.5702 | 0.5344 | **0.5885** |
| JCL-meidum | 0.4784 | 0.4043 | 0.5109 | 0.5780 | **0.5385** | 0.5857 |
| JCL-full | 0.4959 | 0.4169 | **0.5204** | 0.5785 | 0.5217 | 0.5845 |
| Juman | 0.4978 | 0.3777 | 0.4813 | **0.6111** | 0.5211 | 0.5842 |
| IPAdic | 0.4920 | 0.3890 | 0.4923 | 0.5992 | 0.5275 | 0.5760 |
| NEologd | **0.5052** | 0.3832 | 0.5000 | 0.5917 | 0.5267 | 0.5805 |
| **Multiple Lexicon** | | | | | | |
| IPAdic-NEologd | 0.5069 | 0.4000 | 0.5078 | **0.5827** | 0.5191 | 0.5778 |
| IPAdic-NEologd-JCL(medium) | **0.5072** | **0.4115** | **0.5078** | 0.5774 | **0.5308** | 0.5799 |
For Mainichi dataset, after adding dictionary features, JCL-slim boosts the f1 from 0.3848 to 0.4244 for 0-2, and JCL-full boosts the f1 from 0.4921 to 0.5204. -->
## Citation
Please use the following bibtex, when you refer JCLdic from your papers.
```
@INPROCEEDINGS{liang2020jcldic,
author = {Xu Liang, Taniguchi Yasufumi and Nakayama Hiroki},
title = {High Coverage Lexicon for Japanese Company Name Recognition},
booktitle = {Proceedings of the Twenty-six Annual Meeting of the Association for Natural Language Processing},
year = {2020},
pages = {NLP2020-B2-3},
publisher = {The Association for Natural Language Processing},
}
```
|
[
"Information Extraction & Text Mining",
"Term Extraction"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/PKSHATechnology-Research/camphr
|
2020-02-10T03:39:58Z
|
NLP library for creating pipeline components
|
PKSHATechnology-Research / camphr
Public
Branches
Tags
Go to file
Go to file
Code
.github
docker…
docs
img
packag…
scripts
typings
.devco…
.docke…
.gitignore
.gitmo…
.grenrc…
.pre-co…
CONT…
LICEN…
READ…
pyproj…
readth…
test_d…
test_lo…
About
Camphr - NLP library for creating
pipeline components
camphr.readthedocs.io/en/latest/
# spacy # spacy-extension
Readme
Apache-2.0 license
Activity
Custom properties
340 stars
6 watching
17 forks
Report repository
Releases 21
0.7.0
Latest
on Aug 21, 2020
+ 20 releases
Packages
No packages published
Contributors
8
Languages
Python 99.8%
Other 0.2%
Code
Issues
2
Pull requests
2
Actions
Projects
Security
Insights
README
Apache-2.0 license
docs
docs failing
failing chat
chat
on gitter
on gitter
Check the documentation for more information.
Camphr is licensed under Apache 2.0.
Camphr - NLP library for
creating pipeline
components
License
|
<p align="center"><img src="https://raw.githubusercontent.com/PKSHATechnology-Research/camphr/master/img/logoc.svg?sanitize=true" width="200" /></p>
# Camphr - NLP library for creating pipeline components
[](https://camphr.readthedocs.io/en/latest/?badge=latest)
[](https://gitter.im/camphr/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[](https://badge.fury.io/py/camphr)
Check the [documentation](https://camphr.readthedocs.io/en/latest/) for more information.
# License
Camphr is licensed under [Apache 2.0](./LICENSE).
|
[
"Semantic Text Processing",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/ku-nlp/text-cleaning
|
2020-02-10T06:25:12Z
|
A powerful text cleaner for Japanese web texts
|
ku-nlp / text-cleaning
Public
Branches
Tags
Go to file
Go to file
Code
src/text_clea…
tests
.gitignore
LICENSE
Makefile
README.md
poetry.lock
pyproject.toml
This project cleans dirty Japanese texts, which contain a lot of emoji and kaomoji, using a whitelist method.
About
A powerful text cleaner for
Japanese web texts
# python # cleaner # text-preprocessing
Readme
MIT license
Activity
Custom properties
12 stars
1 watching
4 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
3
Taka008 Takashi Kodama
nobu-g Nobuhiro Ueda
dependabot[bot]
Languages
Python 86.8%
Makefile 13.2%
Code
Issues
2
Pull requests
Actions
Projects
Security
Insights
text-cleaning: A powerful Japanese text cleaner
Description
Cleaning Example
INPUT: これはサンプルです(≧∇≦*)!見てみて→http://a.bc/defGHIjkl
OUTPUT: これはサンプルです!見てみて。
INPUT: 一緒に応援してるよ(o^^o)。ありがとう😃
OUTPUT: 一緒に応援してるよ。ありがとう。
INPUT: いいぞ〜⸜(* ॑꒳ ॑* )⸝⋆*
OUTPUT: いいぞ。
INPUT: えっ((((;゚Д゚)))))))
OUTPUT: えっ。
INPUT: 確かに「嘘でしょww」って笑ってたね
OUTPUT: 確かに「嘘でしょ。」って笑ってたね。
INPUT: おはようございますヽ(*´∀`)ノ。。今日は雨ですね・・・・・(T_T)
OUTPUT: おはようございます。今日は雨ですね。
INPUT: (灬º﹃º灬)おいしそうです♡
OUTPUT: おいしそうです。
INPUT: 今日の夜、友達とラーメン行くよ(((o(*゚▽゚*)o)))
OUTPUT: 今日の夜、友達とラーメン行くよ。
# When using the twitter option.
INPUT: @abcde0123 おっとっとwwそうでした✋!!よろしくお願いします♪‼ #挨拶
OUTPUT: おっとっと。そうでした!よろしくお願いします。
README
MIT license
Requirements
Python 3.7+
mojimoji
neologdn
joblib
How to Run
Using python script directly
cat input.txt | python src/text_cleaning/main.py <options> > output.txt
Using makefile
When input files are located hierarchically in directories, you can clean them while keeping the directory structure by using the
Makefile. If the input is compressed files, the Makefile detects their format from the suffix and outputs the cleaned files in the
same format.
make INPUT_DIR=/somewhere/in OUTPUT_DIR=/somewhere/out PYTHON=/somewhere/.venv/bin/python
Options:
FILE_FORMAT=txt: Format of input file (txt or csv or tsv)
NUM_JOBS_PER_MACHINE=10: The maximum number of concurrently running jobs per machine
TWITTER=1: Perform twitter specific cleaning
PYTHON: Path to python interpreter of virtual environment
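To illustrate the whitelist idea behind this cleaner, here is a rough sketch (not the project's actual rules, which also normalize text with mojimoji and neologdn): keep only characters from an allowed set and drop everything else, so kaomoji and emoji disappear.

```python
import re

# Whitelist: hiragana, katakana, kanji, ASCII letters/digits, and common punctuation.
WHITELIST = re.compile(r"[^ぁ-んァ-ヶー一-龥a-zA-Z0-9、。「」!?!?]")
URL = re.compile(r"https?://\S+")

def clean(text: str) -> str:
    text = URL.sub("", text)        # strip URLs first
    return WHITELIST.sub("", text)  # then drop everything not on the whitelist

print(clean("これはサンプルです(≧∇≦*)!見てみて→http://a.bc/defGHIjkl"))
```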
|
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/yagays/nayose-wikipedia-ja
|
2020-03-09T00:17:38Z
|
Wikipediaから作成した日本語名寄せデータセット
|
yagays / nayose-wikipedia-ja
Public
Branches
Tags
Go to file
Go to file
Code
data
.gitignore
READ…
make_…
parse_…
test_uti…
util.py
Wikipediaより作成した日本語の名寄せデータセットです。
Wikipediaを元にした日本語の名寄せデータセットを作成し
ました - Sansan Builders Box
エンティティ数:1,054,856
メンション数:2,213,547
名前
データ数
エンティティ数
メンション数
train
447,071
632,913
322,583
dev
146,153
210,971
118,776
test
149,409
210,972
120,397
About
Wikipediaから作成した日本語名寄
せデータセット
Readme
Activity
34 stars
2 watching
0 forks
Report repository
Releases 1
v1.0
Latest
on Mar 10, 2020
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
nayose-wikipedia-ja
概要
統計
README
参考
[1907.10165v1] Optimal Transport-based Alignment of Learned Character Representations for String Similarity
|
# nayose-wikipedia-ja
## 概要
Wikipediaより作成した日本語の名寄せデータセットです。
[Wikipediaを元にした日本語の名寄せデータセットを作成しました \- Sansan Builders Box](https://buildersbox.corp-sansan.com/entry/2020/03/10/110000)
## 統計
- エンティティ数:1,054,856
- メンション数:2,213,547
| 名前 | データ数 | エンティティ数 | メンション数 |
| :---- | :------ | :------ | :------ |
| `train` | 447,071 | 632,913 | 322,583 |
| `dev` | 146,153 | 210,971 | 118,776 |
| `test` | 149,409 | 210,972 | 120,397 |
## 参考
- [\[1907\.10165v1\] Optimal Transport\-based Alignment of Learned Character Representations for String Similarity](https://arxiv.org/abs/1907.10165v1)
|
[
"Information Extraction & Text Mining",
"Named Entity Recognition",
"Term Extraction"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/verypluming/JapaneseNLI
|
2020-03-10T13:43:23Z
|
Google Colabで日本語テキスト推論を試す
|
verypluming / JapaneseNLI
Public
Branches
Tags
Go to file
Go to file
Code
.gitignore
Japan…
Japan…
LICEN…
READ…
train.tsv
Google Colabで日本語テキスト推論を試す
含意関係認識(Recognizing Textual Entailment, RTE)
または自然言語推論・テキスト推論(Natural Language
Inference)は、以下の例のように、ある前提文に対して
仮説文が推論できるか否かを判定する自然言語処理のタ
スクです。
About
Google Colabで日本語テキスト推
論を試す
Readme
Apache-2.0 license
Activity
6 stars
2 watching
1 fork
Report repository
Releases
No releases published
Packages
No packages published
Languages
Jupyter Notebook 100.0%
Code
Issues
1
Pull requests
Actions
Projects
Security
Insights
JapaneseNLI
概要
前提文: 太郎は花子が山頂まで登っている間に、
山頂まで登った。
仮説文: 太郎は花子が山頂まで登る前に、山頂ま
で登った。
正解ラベル: 含意 (entailment)
前提文: 太郎は花子が山頂まで登る前に、山頂ま
で登った。
仮説文: 太郎は花子が山頂まで登っている間に、
山頂まで登った。
正解ラベル: 非含意 (neutral)
README
Apache-2.0 license
JapaneseBERT_NLI.ipynb: Transformersライブラリの
BERTとGoogle Colabを用いて日本語テキスト推論を試
せるコードです。
JapaneseXLM_NLI.ipynb: Transformersライブラリの
XLMとGoogle Colabを用いて日本語テキスト推論を試
せるコードです。
ファインチューニング用の学習データがある場合は、一
行目はタブ区切りでpremise, hypothesis, gold_labelと記
述し、二行目以降にタブ区切りで前提文、仮説文、正解
ラベル(entailment, contradiction, neutralの3値)が書かれ
たtrain.tsvファイルを用意して、Google Driveにアップ
ロードしてください。 train.tsvのサンプル
Hitomi Yanaka [email protected]
Apache License
前提文: 太郎は花子が山頂まで登る前に、山頂ま
で登った。
仮説文: 太郎は花子が山頂まで登った後に、山頂
まで登った。
正解ラベル: 矛盾 (contradiction)
学習データの用意
Contact
License
|
# JapaneseNLI
Google Colabで日本語テキスト推論を試す
## 概要
含意関係認識(Recognizing Textual Entailment, RTE)または自然言語推論・テキスト推論(Natural Language Inference)は、以下の例のように、ある前提文に対して仮説文が推論できるか否かを判定する自然言語処理のタスクです。
```
前提文: 太郎は花子が山頂まで登っている間に、山頂まで登った。
仮説文: 太郎は花子が山頂まで登る前に、山頂まで登った。
正解ラベル: 含意 (entailment)
前提文: 太郎は花子が山頂まで登る前に、山頂まで登った。
仮説文: 太郎は花子が山頂まで登っている間に、山頂まで登った。
正解ラベル: 非含意 (neutral)
前提文: 太郎は花子が山頂まで登る前に、山頂まで登った。
仮説文: 太郎は花子が山頂まで登った後に、山頂まで登った。
正解ラベル: 矛盾 (contradiction)
```
[JapaneseBERT_NLI.ipynb](https://github.com/verypluming/JapaneseNLI/blob/master/JapaneseBERT_NLI.ipynb):
[Transformers](https://github.com/huggingface/transformers)ライブラリのBERTとGoogle Colabを用いて日本語テキスト推論を試せるコードです。
[JapaneseXLM_NLI.ipynb](https://github.com/verypluming/JapaneseNLI/blob/master/JapaneseXLM_NLI.ipynb):
[Transformers](https://github.com/huggingface/transformers)ライブラリのXLMとGoogle Colabを用いて日本語テキスト推論を試せるコードです。
## 学習データの用意
ファインチューニング用の学習データがある場合は、一行目はタブ区切りでpremise, hypothesis, gold_labelと記述し、二行目以降にタブ区切りで前提文、仮説文、正解ラベル(entailment, contradiction, neutralの3値)が書かれたtrain.tsvファイルを用意して、[Google Drive](https://www.google.co.jp/drive/apps.html)にアップロードしてください。
[train.tsvのサンプル](https://github.com/verypluming/JapaneseNLI/blob/master/train.tsv)
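以下は、上記の形式に沿った train.tsv を作成する最小のスケッチです(本リポジトリのスクリプトではなく、例文は上の概要の例を流用した仮の内容です)。
```python
# train.tsv を作る最小のスケッチ(本リポジトリのコードではありません)。
# 一行目はヘッダ、二行目以降はタブ区切りで前提文・仮説文・正解ラベルを書きます。
import csv

rows = [
    ("太郎は花子が山頂まで登っている間に、山頂まで登った。",
     "太郎は花子が山頂まで登る前に、山頂まで登った。",
     "entailment"),
    ("太郎は花子が山頂まで登る前に、山頂まで登った。",
     "太郎は花子が山頂まで登った後に、山頂まで登った。",
     "contradiction"),
]

with open("train.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["premise", "hypothesis", "gold_label"])
    writer.writerows(rows)
```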
## Contact
Hitomi Yanaka [email protected]
## License
Apache License
|
[
"Reasoning",
"Semantic Text Processing",
"Textual Inference"
] |
[] |
true |
https://github.com/Tzawa/google-vs-deepl-je
|
2020-03-22T10:45:10Z
|
google-vs-deepl-je
|
Tzawa / google-vs-deepl-je
Public
Branches
Tags
Go to file
Go to file
Code
Pipfile
READ…
devtest…
devtest…
devtest…
devtest…
devtest…
devtest…
eval.py
jparacr…
jparacr…
jparacr…
jparacr…
jparacr…
jparacr…
About
No description, website, or topics
provided.
Readme
Activity
4 stars
1 watching
1 fork
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
google-vs-deepl-je
pipenv install
pipenv run python eval.py devtest.en-ja.google devtest.ja -l ja
pipenv run python eval.py devtest.en-ja.deepl devtest.ja -l ja
README
Ja -> En (as of March 22, 2020)
DATA
Google
DeepL
ASPEC (devtest)
24.030
20.431
JParaCrawl
25.819
26.833
En -> Ja (as of March 22, 2020)
DATA
Google
DeepL
ASPEC (devtest)
28.554
36.244
JParaCrawl
25.554
27.048
pipenv run python eval.py devtest.ja-en.google devtest.en -l en
pipenv run python eval.py devtest.ja-en.deepl devtest.en -l en
|
# google-vs-deepl-je
```sh
pipenv install
pipenv run python eval.py devtest.en-ja.google devtest.ja -l ja
pipenv run python eval.py devtest.en-ja.deepl devtest.ja -l ja
pipenv run python eval.py devtest.ja-en.google devtest.en -l en
pipenv run python eval.py devtest.ja-en.deepl devtest.en -l en
```
Ja -> En (as of March 22, 2020)
|DATA|Google|DeepL|
| ------------- | ------------- | ------------- |
|ASPEC (devtest)|**24.030**|20.431|
|JParaCrawl|25.819|**26.833**|
En -> Ja (as of March 22, 2020)
|DATA|Google|DeepL|
| ------------- | ------------- | ------------- |
|ASPEC (devtest)|28.554|**36.244**|
|JParaCrawl|25.554|**27.048**|
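As a rough illustration only (the repository's own `eval.py` is not shown here and may use different options), corpus BLEU scores like the ones above can be computed with sacrebleu:
```python
# Sketch: corpus BLEU with sacrebleu. Illustrative only; not this repository's eval.py.
import sacrebleu

with open("devtest.en-ja.google", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("devtest.ja", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

# "ja-mecab" tokenization needs the optional MeCab dependency; use "13a" for English output.
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="ja-mecab")
print(round(bleu.score, 3))
```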
|
[
"Machine Translation",
"Multilinguality",
"Text Generation"
] |
[] |
true |
https://github.com/polm/unidic-lite
|
2020-04-07T07:24:00Z
|
A small version of UniDic for easy pip installs.
|
polm / unidic-lite
Public
Branches
Tags
Go to file
Go to file
Code
unidic_…
.gitignore
LICEN…
LICEN…
READ…
setup.py
This is a version of unidic-py that is designed to be
installable with pip alone, not requiring any extra
downloads.
At the moment it uses Unidic 2.1.2, from 2013, which is
the most recent release of UniDic that's small enough to
be distributed via PyPI.
Note this package takes roughly 250MB on disk after
being installed.
In order to use this you will need to install a MeCab
wrapper such as mecab-python3 or fugashi.
About
A small version of UniDic for easy
pip installs.
# nlp # japanese # unidic
Readme
MIT, BSD-3-Clause licenses found
Activity
38 stars
3 watching
3 forks
Report repository
Releases
1 tags
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Unidic Lite
README
MIT license
BSD-3-Clause license
This has a few changes from the official UniDic release to
make it easier to use.
entries for 令和 have been added
single-character numeric and alphabetic words have
been deleted
unk.def has been modified so unknown
punctuation won't be marked as a noun
This code is licensed under the MIT or WTFPL license as
Differences from the Official UniDic
Release
License
|
[](https://pypi.org/project/unidic-lite/)
# Unidic Lite
This is a version of [unidic-py](https://github.com/polm/unidic-py) that is
designed to be installable with pip alone, not requiring any extra downloads.
At the moment it uses Unidic 2.1.2, from 2013, which is the most recent release
of UniDic that's small enough to be distributed via PyPI.
**Note this package takes roughly 250MB on disk after being installed.**
In order to use this you will need to install a MeCab wrapper such as
[mecab-python3](https://github.com/SamuraiT/mecab-python3) or
[fugashi](https://github.com/polm/fugashi).
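As a quick sanity check (a sketch, not part of this package's documentation), tokenization with fugashi typically works out of the box once unidic-lite is installed:
```python
# Sketch: tokenize Japanese text with fugashi, which falls back to unidic-lite
# when no other MeCab dictionary is configured. See the fugashi docs for details.
import fugashi

tagger = fugashi.Tagger()
for word in tagger("これはペンです。"):
    # surface form, coarse POS, and dictionary lemma from the UniDic feature set
    print(word.surface, word.feature.pos1, word.feature.lemma)
```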
## Differences from the Official UniDic Release
This has a few changes from the official UniDic release to make it easier to use.
- entries for 令和 have been added
- single-character numeric and alphabetic words have been deleted
- `unk.def` has been modified so unknown punctuation won't be marked as a noun
## License
This code is licensed under the MIT or WTFPL license, as you prefer. Unidic
2.1.2 is copyright the UniDic Consortium and distributed under the terms of the
[BSD license](./LICENSE.unidic).
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/polm/cutlet
|
2020-04-16T14:00:34Z
|
Japanese to romaji converter in Python
|
polm / cutlet
Public
4 Branches
37 Tags
Go to file
Go to file
Code
polm Add to gitignore
9574fad · 5 months ago
.github
Remove 3.7, …
7 months ago
cutlet
Add flag for to…
7 months ago
docs
Update Strea…
10 months ago
.gitignore
Add to gitignore
5 months ago
LICEN…
Add MIT Lice…
4 years ago
MANIF…
Fix inclusion …
4 years ago
READ…
Update Strea…
10 months ago
cutlet.p…
Initial commit
4 years ago
pyproj…
Add setuptool…
4 years ago
pytest.ini
Initial commit
4 years ago
require…
Add requirem…
last year
setup.py
Bump minimu…
7 months ago
About
Japanese to romaji converter in
Python
polm.github.io/cutlet/
# nlp # japanese # romaji
Readme
MIT license
Activity
305 stars
7 watching
21 forks
Report repository
Releases 4
v0.3.0: Token-aligned ro…
Latest
on Oct 11, 2023
+ 3 releases
Sponsor this project
polm Paul O'Leary McCann
Learn more about GitHub Sponsors
Contributors
7
Code
Issues
6
Pull requests
2
Discussions
Actions
Projects
Secur
cutlet
Sponsor
README
MIT license
Cutlet is a tool to convert Japanese to romaji. Check out the
interactive demo! Also see the docs and the original blog post.
issueを英語で書く必要はありません。
Features:
support for Modified Hepburn, Kunreisiki, Nihonsiki systems
custom overrides for individual mappings
custom overrides for specific words
built in exceptions list (Tokyo, Osaka, etc.)
uses foreign spelling when available in UniDic
proper nouns are capitalized
slug mode for url generation
Things not supported:
traditional Hepburn n-to-m: Shimbashi
macrons or circumflexes: Tōkyō, Tôkyô
passport Hepburn: Satoh (but you can use an exception)
hyphenating words
Traditional Hepburn in general is not supported
Internally, cutlet uses fugashi, so you can use the same
dictionary you use for normal tokenization.
Cutlet can be installed through pip as usual.
Note that if you don't have a MeCab dictionary installed you'll
also have to install one. If you're just getting started unidic-lite is
a good choice.
A command-line script is included for quick testing. Just use
cutlet and each line of stdin will be treated as a sentence. You
can specify the system to use ( hepburn , kunrei , nippon , or
nihon ) as the first argument.
Languages
Python 100.0%
Installation
pip install cutlet
pip install unidic-lite
Usage
In code:
kakasi: Historically important, but not updated since 2014.
pykakasi: self contained, it does segmentation on its own
and uses its own dictionary.
kuroshiro: Javascript based.
kana: Go based.
$ cutlet
ローマ字変換プログラム作ってみた。
Roma ji henkan program tsukutte mita.
import cutlet
katsu = cutlet.Cutlet()
katsu.romaji("カツカレーは美味しい")
# => 'Cutlet curry wa oishii'
# you can print a slug suitable for urls
katsu.slug("カツカレーは美味しい")
# => 'cutlet-curry-wa-oishii'
# You can disable using foreign spelling too
katsu.use_foreign_spelling = False
katsu.romaji("カツカレーは美味しい")
# => 'Katsu karee wa oishii'
# kunreisiki, nihonsiki work too
katu = cutlet.Cutlet('kunrei')
katu.romaji("富士山")
# => 'Huzi yama'
# comparison
nkatu = cutlet.Cutlet('nihon')
sent = "彼女は王への手紙を読み上げた。"
katsu.romaji(sent)
# => 'Kanojo wa ou e no tegami wo yomiageta.'
katu.romaji(sent)
# => 'Kanozyo wa ou e no tegami o yomiageta.'
nkatu.romaji(sent)
# => 'Kanozyo ha ou he no tegami wo yomiageta.'
Alternatives
|
[](https://polm-cutlet-demo-demo-0tur8v.streamlit.app/)
[](https://pypi.org/project/cutlet/)
# cutlet
<img src="https://github.com/polm/cutlet/raw/master/cutlet.png" width=125 height=125 alt="cutlet by Irasutoya" />
Cutlet is a tool to convert Japanese to romaji. Check out the [interactive demo][demo]! Also see the [docs](https://polm.github.io/cutlet/cutlet.html) and the [original blog post](https://www.dampfkraft.com/nlp/cutlet-python-romaji-converter.html).
[demo]: https://polm-cutlet-demo-demo-0tur8v.streamlit.app/
**issueを英語で書く必要はありません。**
Features:
- support for [Modified Hepburn](https://en.wikipedia.org/wiki/Hepburn_romanization), [Kunreisiki](https://en.wikipedia.org/wiki/Kunrei-shiki_romanization), [Nihonsiki](https://en.wikipedia.org/wiki/Nihon-shiki_romanization) systems
- custom overrides for individual mappings
- custom overrides for specific words
- built in exceptions list (Tokyo, Osaka, etc.)
- uses foreign spelling when available in UniDic
- proper nouns are capitalized
- slug mode for url generation
Things not supported:
- traditional Hepburn n-to-m: Shimbashi
- macrons or circumflexes: Tōkyō, Tôkyô
- passport Hepburn: Satoh (but you can use an exception)
- hyphenating words
- Traditional Hepburn in general is not supported
Internally, cutlet uses [fugashi](https://github.com/polm/fugashi), so you can
use the same dictionary you use for normal tokenization.
## Installation
Cutlet can be installed through pip as usual.
pip install cutlet
Note that if you don't have a MeCab dictionary installed you'll also have to
install one. If you're just getting started
[unidic-lite](https://github.com/polm/unidic-lite) is a good choice.
pip install unidic-lite
## Usage
A command-line script is included for quick testing. Just use `cutlet` and each
line of stdin will be treated as a sentence. You can specify the system to use
(`hepburn`, `kunrei`, `nippon`, or `nihon`) as the first argument.
$ cutlet
ローマ字変換プログラム作ってみた。
Roma ji henkan program tsukutte mita.
In code:
```python
import cutlet
katsu = cutlet.Cutlet()
katsu.romaji("カツカレーは美味しい")
# => 'Cutlet curry wa oishii'
# you can print a slug suitable for urls
katsu.slug("カツカレーは美味しい")
# => 'cutlet-curry-wa-oishii'
# You can disable using foreign spelling too
katsu.use_foreign_spelling = False
katsu.romaji("カツカレーは美味しい")
# => 'Katsu karee wa oishii'
# kunreisiki, nihonsiki work too
katu = cutlet.Cutlet('kunrei')
katu.romaji("富士山")
# => 'Huzi yama'
# comparison
nkatu = cutlet.Cutlet('nihon')
sent = "彼女は王への手紙を読み上げた。"
katsu.romaji(sent)
# => 'Kanojo wa ou e no tegami wo yomiageta.'
katu.romaji(sent)
# => 'Kanozyo wa ou e no tegami o yomiageta.'
nkatu.romaji(sent)
# => 'Kanozyo ha ou he no tegami wo yomiageta.'
```
## Alternatives
- [kakasi](http://kakasi.namazu.org/index.html.ja): Historically important, but not updated since 2014.
- [pykakasi](https://github.com/miurahr/pykakasi): self contained, it does segmentation on its own and uses its own dictionary.
- [kuroshiro](https://github.com/hexenq/kuroshiro): Javascript based.
- [kana](https://github.com/gojp/kana): Go based.
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/sociocom/DNorm-J
|
2020-05-07T04:47:42Z
|
Japanese version of DNorm
|
sociocom / DNorm-J
Public
5 Branches
0 Tags
Go to file
Go to file
Code
shuntaroy add simple test
265ba7c · 2 years ago
dnorm_j
add simple test
2 years ago
docs
fix
4 years ago
tests
add simple test
2 years ago
.gitignore
add python b…
4 years ago
LICEN…
fix
4 years ago
MANIF…
fix
4 years ago
Makefile
fix
4 years ago
READ…
add simple test
2 years ago
accura…
Add files via u…
3 years ago
require…
Bump numpy …
2 years ago
setup.py
add simple test
2 years ago
日本語の病名を正規化するツールです
DNormの日本語実装になります.
Tf-idf ベースのランキング手法により病名を正規化しま
す。
About
Japanese version of DNorm
Readme
BSD-2-Clause license
Activity
Custom properties
9 stars
2 watching
2 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
3
ujiuji1259
shuntaroy Shuntaro Yada
shokowakamiya
Languages
Python 99.4%
Makefile 0.6%
Code
Issues
1
Pull requests
Actions
Projects
Security
Insights
DNorm-J
概要
手法
README
BSD-2-Clause license
詳細はリンク先の論文をご参照ください.
python >= 3.6.1
MeCab >= 0.996.5
IPA 辞書
ターミナルなどの端末アプリでコマンドラインアプリケ
ーションとして使えるほか,Python スクリプト内でラ
イブラリとして導入することが可能です.
いずれの使い方でも,初回に学習済みモデルファイルを
ローカル($HOME/.cache/Dnorm )にダウンロードしま
す. そのため,初回起動には時間がかかります.
-i:入力ファイル
-o:出力ファイル
-n:正規化先の病名リスト(デフォルト設定では指
定する必要はありません)
-d:略語展開辞書(デフォルト設定では指定する必
要はありません)
python -m dnorm_j -i sample.txt -o output.txt
環境
インストール
pip install
git+https://github.com/sociocom/DNorm-
J.git
使い方
コマンドラインからの利用
入力(sample.txt)
腸閉塞症状
高Ca尿症
二次性副腎不全
出力(output.txt)
イレウス
高カルシウム尿症
氏家翔吾(奈良先端科学技術大学院大学)
副腎クリーゼ
ライブラリとしての利用
from dnorm_j import DNorm
model = DNorm.from_pretrained()
result = model.normalize('AML')
print(result) # => '急性骨髄性白血病'
性能
コントリビュータ
|
# DNorm-J
## 概要
日本語の病名を正規化するツールです
## 手法
[DNorm](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3810844/)の日本語実装になります.
Tf-idf ベースのランキング手法により病名を正規化します。
詳細はリンク先の論文をご参照ください.
## 環境
- python >= 3.6.1
- MeCab >= 0.996.5
- IPA 辞書
## インストール
```
pip install git+https://github.com/sociocom/DNorm-J.git
```
## 使い方
ターミナルなどの端末アプリでコマンドラインアプリケーションとして使えるほか,Python スクリプト内でライブラリとして導入することが可能です.
いずれの使い方でも,初回に学習済みモデルファイルをローカル(`$HOME/.cache/Dnorm`)にダウンロードします.
そのため,初回起動には時間がかかります.
### コマンドラインからの利用
- -i:入力ファイル
- -o:出力ファイル
- -n:正規化先の病名リスト(デフォルト設定では指定する必要はありません)
- -d:略語展開辞書(デフォルト設定では指定する必要はありません)
`python -m dnorm_j -i sample.txt -o output.txt`
#### 入力(sample.txt)
```
腸閉塞症状
高Ca尿症
二次性副腎不全
```
#### 出力(output.txt)
```
イレウス
高カルシウム尿症
副腎クリーゼ
```
### ライブラリとしての利用
```python
from dnorm_j import DNorm
model = DNorm.from_pretrained()
result = model.normalize('AML')
print(result) # => '急性骨髄性白血病'
```
## 性能

## コントリビュータ
- 氏家翔吾(奈良先端科学技術大学院大学)
|
[
"Syntactic Text Processing",
"Term Extraction",
"Text Normalization"
] |
[] |
true |
https://github.com/megagonlabs/UD_Japanese-GSD
|
2020-05-16T00:37:31Z
|
Japanese data from the Google UDT 2.0.
|
megagonlabs / UD_Japanese-GSD
Public
forked from UniversalDependencies/UD_Japanese-GSD
Branches
Tags
Go to file
Go to file
Code
spacy
stanza
.gitignore
CITATION
CONTRIBUTING.md
LICENSE.txt
README.md
ene_mapping.xlsx
eval.log
ja_gsd-ud-dev.conllu
ja_gsd-ud-test.conllu
ja_gsd-ud-train.conllu
leader_board.md
stats.xml
This Universal Dependencies (UD) Japanese treebank is based on the definition of UD Japanese convention described in the UD
documentation. The original sentences are from Google UDT 2.0.
In addition, the Megagon Labs Tokyo added the files named *.ne.conllu which contain the BILUO style Named Entity gold
labels in misc field. Below files are converted from *.ne.conllu for the NLP frameworks.
ja_gsd-ud-(train|dev|test).ne.json : https://spacy.io/api/cli#train
(train|dev|test).ne.bio : https://github.com/stanfordnlp/stanza#training-your-own-neural-pipelines
The Japanese UD treebank contains the sentences from Google Universal Dependency Treebanks v2.0 (legacy):
https://github.com/ryanmcd/uni-dep-tb. First, Google UDT v2.0 was converted to UD-style with bunsetsu-based word units (say
"master" corpus).
About
Japanese data from the Google
UDT 2.0.
Readme
View license
Activity
Custom properties
28 stars
1 watching
2 forks
Report repository
Releases 6
UD Japanese GSD r2.10 …
Latest
on May 29, 2022
+ 5 releases
Packages
No packages published
Languages
Python 100.0%
Code
Pull requests
Actions
Projects
Security
Insights
Summary
Introduction
README
License
The word units in "master" is significantly different from the definition of the documents based on Short Unit Word (SUW) [1],
then the sentences are automatically re-processed by Hiroshi Kanayama in Feb 2017. It is the Japanese_UD v2.0 and used in
the CoNLL 2017 shared task. In November 2017, UD_Japanese v2.0 is merged with the "master" data so that the manual
annotations for dependencies can be reflected to the corpus. It reduced the errors in the dependency structures and relation
labels.
Still there are slight differences in the word unit between UD_Japanese v2.1 and UD_Japanese-KTC 1.3.
In May 2020, we introduced a UD_Japanese-BCCWJ[3]-like conversion method for UD_Japanese GSD v2.6.
The data is tokenized manually in a three layered tokenization of Short Unit Word (SUW)[4], Long Unit Word (LUW)[5], and base-
phrase (bunsetsu)[5] as in the `Balanced Corpus of Contemporary Written Japanese'[6]. The original morphological labels are based
on UniDic POS tagset [7] We use the slightly changed version of SUW as the UD word tokenization, in which the cardinal
numbers are concatenated as in one word.
The (base-)phrase level dependency structures are annotated manually following the guideline of BCCWJ-DepPara[8]. The phrase
level dependency structures are converted into the word level dependency structures by the head rule of the dependency
analyser CaboCha[9].
LEMMA is the base form of conjugated words -- verbs, adjectives, and auxiliary verbs by the UniDic schema [7].
XPOS is the part-of-speech label for Short Unit Word (SUW) based on UniDic POS tagset [7].
SpaceAfter: manually annotated to discriminate alphanumeric word tokens
BunsetuPositionType: heads in a bunsetu by the head rules [9];
SEM_HEAD: the head content word
SYN_HEAD: the head functional word
CONT: the non-head content word
FUNC: the non-head functional word
LUWPOS: the part-of-speech label for Long Unit Word (LUW) based on UniDic POS tagset [7].
LUWBILabel: Long Unit Word (LUW) boundary labels [5]
B: Beginning of LUW
I: Inside of LUW
UniDicInfo: lemma information based on UniDic [7]. The UniDic lemma normalise not only conjugation forms but also
orthographical variants.
1 lForm: lexeme reading (語彙素読み)
2 lemma: lexeme (語彙素)
3 orth: Infinitive Form and Surface Form (書字形出現形)
4 pron: Surface Pronunciation (発音形出現形)
5 orthBase: Infinitive Form (書字形基本形)
6 pronBase: Surface Pronunciation(発音形基本形)
Specification
Overview
LEMMA field
XPOS field
MISC field
7 form: Word Form (語形)
8 formBase: Word Form (語形基本形)
The original treebank was provided by:
Adam LaMontagne
Milan Souček
Timo Järvinen
Alessandra Radici
via
Dan Zeman.
The corpus was converted by:
Mai Omura
Yusuke Miyao
Hiroshi Kanayama
Hiroshi Matsuda
through annotation, discussion and validation with
Aya Wakasa
Kayo Yamashita
Masayuki Asahara
Takaaki Tanaka
Yugo Murawaki
Yuji Matsumoto
Kaoru Ito
Taishi Chika
Shinsuke Mori
Sumire Uematsu
See file LICENSE.txt
[1] Tanaka, T., Miyao, Y., Asahara, M., Uematsu, S., Kanayama, H., Mori, S., & Matsumoto, Y. (2016). Universal Dependencies for
Japanese. In LREC.
[2] Asahara, M., Kanayama, H., Tanaka, T., Miyao, Y., Uematsu, S., Mori, S., Matsumoto, Y., Omura, M, & Murawaki, Y. (2018).
Universal Dependencies Version 2 for Japanese. In LREC.
[3] Omura, M., & Asahara, M. (2020). UD-Japanese BCCWJ: Universal Dependencies Annotation for the Balanced Corpus of
Contemporary Written Japanese. In UDW 2018.
[4] 小椋 秀樹, 小磯 花絵, 冨士池優美, 宮内 佐夜香, 小西 光, 原 裕 (2011). 『現代日本語書き言葉均衡コーパス』形態論情報規程集
第4版 (下),(LR-CCG-10-05-02), 国立国語研究所, Tokyo, Japan.
[5] 小椋 秀樹, 小磯 花絵, 冨士池優美, 宮内 佐夜香, 小西 光, 原 裕 (2011). 『現代日本語書き言葉均衡コーパス』形態論情報規程集
第4版 (上),(LR-CCG-10-05-01), 国立国語研究所, Tokyo, Japan.
Acknowledgments
License
Reference
[6] Maekawa, K., Yamazaki, M., Ogiso, T., Maruyama, T., Ogura, H., Kashino, W., Koiso, H., Yamaguchi, M., Tanaka, M., & Den,
Y. (2014). Balanced Corpus of Contemporary Written Japanese. Language Resources and Evaluation, 48(2):345-371.
[7] Den, Y., Nakamura, J., Ogiso, T., Ogura, H., (2008). A Proper Approach to Japanese Morphological Analysis: Dictionary,
Model, and Evaluation. In LREC 2008. pp.1019-1024.
[8] Asahara, M., & Matsumoto, Y. (2016). BCCWJ-DepPara: A Syntactic Annotation Treebank on the `Balanced Corpus of
Contemporary Written Japanese'. In ALR-12.
[9] Kudo, T. & Matsumoto, Y. (2002). Japanese Dependency Analysis using Cascaded Chunking, In CoNLL 2002. pp.63-69.
[10] 松田 寛, 若狭 絢, 山下 華代, 大村 舞, 浅原 正幸 (2020). UD Japanese GSD の再整備と固有表現情報付与, 言語処理学会第26
回年次大会発表論文集
2020-05- v2.6
Update for v2.6. Introduce the conversion method of UD-Japanese BCCWJ [3]
Add the files containing the NE gold labels
2019-11-15 v2.5
Google gave permission to drop the "NC" restriction from the license. This applies to the UD annotations (not the
underlying content, of which Google claims no ownership or copyright).
2018-11- v2.3
Updates for v2.3. Errors in morphologies are fixed, and unknown words and dep labels are reduced. XPOS is added.
2017-11- v2.1
Updates for v2.1. Several errors are removed by adding PoS/label rules and merging the manual dependency
annotations in the original bunsetu-style annotations in Google UDT 2.0.
2017-03-01 v2.0
Converted to UD v2 guidelines.
2016-11-15 v1.4
Initial release in Universal Dependencies.
Changelog
===================================
Universal Dependency Treebanks v2.0
(legacy information)
===================================
=========================
Licenses and terms-of-use
=========================
For the following languages
German, Spanish, French, Indonesian, Italian, Japanese, Korean and Brazilian
Portuguese
we will distinguish between two portions of the data.
1. The underlying text for sentences that were annotated. This data Google
asserts no ownership over and no copyright over. Some or all of these
sentences may be copyrighted in some jurisdictions. Where copyrighted,
Google collected these sentences under exceptions to copyright or implied
license rights. GOOGLE MAKES THEM AVAILABLE TO YOU 'AS IS', WITHOUT ANY
WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED.
2. The annotations -- part-of-speech tags and dependency annotations. These are
made available under a CC BY-SA 4.0. GOOGLE MAKES
THEM AVAILABLE TO YOU 'AS IS', WITHOUT ANY WARRANTY OF ANY KIND, WHETHER
EXPRESS OR IMPLIED. See attached LICENSE file for the text of CC BY-NC-SA.
Portions of the German data were sampled from the CoNLL 2006 Tiger Treebank
data. Hans Uszkoreit graciously gave permission to use the underlying
sentences in this data as part of this release.
Any use of the data should reference the above plus:
Universal Dependency Annotation for Multilingual Parsing
Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg,
Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang,
Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello and Jungmee Lee
Proceedings of ACL 2013
=======
Contact
=======
[email protected]
[email protected]
[email protected]
See https://github.com/ryanmcd/uni-dep-tb for more details
=== Machine-readable metadata
=================================================
Data available since: UD v1.4 License: CC BY-SA 4.0
Includes text: yes Genre: news blog Lemmas: converted
from manual UPOS: converted from manual XPOS: manual
native Features: not available Relations: converted from
manual Contributors: Omura, Mai; Miyao, Yusuke;
Kanayama, Hiroshi; Matsuda, Hiroshi; Wakasa, Aya;
Yamashita, Kayo; Asahara, Masayuki; Tanaka, Takaaki;
Murawaki, Yugo; Matsumoto, Yuji; Mori, Shinsuke;
Uematsu, Sumire; McDonald, Ryan; Nivre, Joakim; Zeman,
Daniel Contributing: here Contact: [email protected]
|
# Summary
This Universal Dependencies (UD) Japanese treebank is based on the definition of UD Japanese convention described in the UD documentation. The original sentences are from Google UDT 2.0.
In addition, Megagon Labs Tokyo added the files named `*.ne.conllu`, which contain BILUO-style Named Entity gold labels in the MISC field.
The files below are converted from `*.ne.conllu` for the following NLP frameworks.
- `ja_gsd-ud-(train|dev|test).ne.json`: https://spacy.io/api/cli#train
- `(train|dev|test).ne.bio`: https://github.com/stanfordnlp/stanza#training-your-own-neural-pipelines
# Introduction
The Japanese UD treebank contains the sentences from Google Universal Dependency Treebanks v2.0 (legacy): https://github.com/ryanmcd/uni-dep-tb. First, Google UDT v2.0 was converted to UD-style with bunsetsu-based word units (say "master" corpus).
The word units in "master" are significantly different from the definition of the documents based on **Short Unit Word** (SUW) [1], so the sentences were automatically re-processed by Hiroshi Kanayama in Feb 2017. This became Japanese_UD v2.0 and was used in the CoNLL 2017 shared task.
In November 2017, UD_Japanese v2.0 was merged with the "master" data so that the manual annotations for dependencies could be reflected in the corpus. This reduced the errors in the dependency structures and relation labels.
Still there are slight differences in the word unit between UD_Japanese v2.1 and UD_Japanese-KTC 1.3.
In May 2020, we introduced a UD_Japanese-BCCWJ[3]-like conversion method for UD_Japanese GSD v2.6.
# Specification
## Overview
The data is tokenized manually in a three-layered tokenization of Short Unit Word (SUW)[4], Long Unit Word (LUW)[5], and base-phrase (bunsetsu)[5], as in the `Balanced Corpus of Contemporary Written Japanese'[6]. The original morphological labels are based on the UniDic POS tagset [7].
We use a slightly changed version of SUW as the UD word tokenization, in which cardinal numbers are concatenated into one word.
The (base-)phrase level dependency structures are annotated manually following the guideline of BCCWJ-DepPara[8]. The phrase level dependency structures are converted into word level dependency structures by the head rule of the dependency analyser CaboCha[9].
## LEMMA field
LEMMA is the base form of conjugated words -- verbs, adjectives, and auxiliary verbs by the UniDic schema [7].
## XPOS field
XPOS is the part-of-speech label for Short Unit Word (SUW) based on UniDic POS tagset [7].
## MISC field
- SpaceAfter: manually annotated to discriminate alphanumeric word tokens
- BunsetuPositionType: heads in a bunsetu by the head rules [9];
- SEM_HEAD: the head content word
- SYN_HEAD: the head functional word
- CONT: the non-head content word
- FUNC: the non-head functional word
- LUWPOS: the part-of-speech label for Long Unit Word (LUW) based on UniDic POS tagset [7].
- LUWBILabel: Long Unit Word (LUW) boundary labels [5]
- B: Beginning of LUW
- I: Inside of LUW
- UniDicInfo: lemma information based on UniDic [7]. The UniDic lemma normalises
not only conjugation forms but also orthographical variants.
- 1 lForm: lexeme reading (語彙素読み)
- 2 lemma: lexeme (語彙素)
- 3 orth: Infinitive Form and Surface Form (書字形出現形)
- 4 pron: Surface Pronunciation (発音形出現形)
- 5 orthBase: Infinitive Form (書字形基本形)
- 6 pronBase: Surface Pronunciation(発音形基本形)
- 7 form: Word Form (語形)
- 8 formBase: Word Form (語形基本形)
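As a minimal sketch (not an official script of this treebank), the MISC attributes listed above can be read from the released CoNLL-U files like this:
```python
# Sketch: pull MISC key=value pairs (e.g. LUWBILabel, BunsetuPositionType)
# out of a CoNLL-U file. MISC is the 10th tab-separated column.
def read_misc(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue  # skip blank lines and sentence-level comments
            cols = line.split("\t")
            attrs = dict(kv.split("=", 1) for kv in cols[9].split("|") if "=" in kv)
            yield cols[1], attrs.get("LUWBILabel"), attrs.get("BunsetuPositionType")

for form, luw_bi, bunsetu_pos in read_misc("ja_gsd-ud-dev.conllu"):
    print(form, luw_bi, bunsetu_pos)
```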
# Acknowledgments
The original treebank was provided by:
- Adam LaMontagne
- Milan Souček
- Timo Järvinen
- Alessandra Radici
via
- Dan Zeman.
The corpus was converted by:
- Mai Omura
- Yusuke Miyao
- Hiroshi Kanayama
- Hiroshi Matsuda
through annotation, discussion and validation with
- Aya Wakasa
- Kayo Yamashita
- Masayuki Asahara
- Takaaki Tanaka
- Yugo Murawaki
- Yuji Matsumoto
- Kaoru Ito
- Taishi Chika
- Shinsuke Mori
- Sumire Uematsu
# License
See file LICENSE.txt
# Reference
[1] Tanaka, T., Miyao, Y., Asahara, M., Uematsu, S., Kanayama, H., Mori, S., & Matsumoto, Y. (2016). Universal Dependencies for Japanese. In LREC.
[2] Asahara, M., Kanayama, H., Tanaka, T., Miyao, Y., Uematsu, S., Mori, S., Matsumoto, Y., Omura, M, & Murawaki, Y. (2018). Universal Dependencies Version 2 for Japanese. In LREC.
[3] Omura, M., & Asahara, M. (2020). UD-Japanese BCCWJ: Universal Dependencies Annotation for the Balanced Corpus of Contemporary Written Japanese. In UDW 2018.
[4] 小椋 秀樹, 小磯 花絵, 冨士池優美, 宮内 佐夜香, 小西 光, 原 裕 (2011).
『現代日本語書き言葉均衡コーパス』形態論情報規程集 第4版 (下),(LR-CCG-10-05-02), 国立国語研究所, Tokyo, Japan.
[5] 小椋 秀樹, 小磯 花絵, 冨士池優美, 宮内 佐夜香, 小西 光, 原 裕 (2011).
『現代日本語書き言葉均衡コーパス』形態論情報規程集 第4版 (上),(LR-CCG-10-05-01), 国立国語研究所, Tokyo, Japan.
[6] Maekawa, K., Yamazaki, M., Ogiso, T., Maruyama, T., Ogura, H., Kashino, W., Koiso, H., Yamaguchi, M., Tanaka, M., & Den, Y. (2014). Balanced Corpus of Contemporary Written Japanese. Language Resources and Evaluation, 48(2):345-371.
[7] Den, Y., Nakamura, J., Ogiso, T., Ogura, H., (2008). A Proper Approach to Japanese Morphological Analysis: Dictionary, Model, and Evaluation. In LREC 2008. pp.1019-1024.
[8] Asahara, M., & Matsumoto, Y. (2016). BCCWJ-DepPara: A Syntactic Annotation Treebank on the `Balanced Corpus of Contemporary Written Japanese'. In ALR-12.
[9] Kudo, T. & Matsumoto, Y. (2002). Japanese Dependency Analysis using Cascaded Chunking, In CoNLL 2002. pp.63-69.
[10] 松田 寛, 若狭 絢, 山下 華代, 大村 舞, 浅原 正幸 (2020).
UD Japanese GSD の再整備と固有表現情報付与, 言語処理学会第26回年次大会発表論文集
# Changelog
* 2020-05- v2.6
* Update for v2.6. Introduce the conversion method of UD-Japanese BCCWJ [3]
* Add the files containing the NE gold labels
* 2019-11-15 v2.5
* Google gave permission to drop the "NC" restriction from the license.
This applies to the UD annotations (not the underlying content, of which Google claims no ownership or copyright).
* 2018-11- v2.3
* Updates for v2.3. Errors in morphologies are fixed, and unknown words and dep labels are reduced. XPOS is added.
* 2017-11- v2.1
* Updates for v2.1. Several errors are removed by adding PoS/label rules and merging the manual dependency annotations in the original bunsetu-style annotations in Google UDT 2.0.
* 2017-03-01 v2.0
* Converted to UD v2 guidelines.
* 2016-11-15 v1.4
* Initial release in Universal Dependencies.
```
===================================
Universal Dependency Treebanks v2.0
(legacy information)
===================================
=========================
Licenses and terms-of-use
=========================
For the following languages
German, Spanish, French, Indonesian, Italian, Japanese, Korean and Brazilian
Portuguese
we will distinguish between two portions of the data.
1. The underlying text for sentences that were annotated. This data Google
asserts no ownership over and no copyright over. Some or all of these
sentences may be copyrighted in some jurisdictions. Where copyrighted,
Google collected these sentences under exceptions to copyright or implied
license rights. GOOGLE MAKES THEM AVAILABLE TO YOU 'AS IS', WITHOUT ANY
WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED.
2. The annotations -- part-of-speech tags and dependency annotations. These are
made available under a CC BY-SA 4.0. GOOGLE MAKES
THEM AVAILABLE TO YOU 'AS IS', WITHOUT ANY WARRANTY OF ANY KIND, WHETHER
EXPRESS OR IMPLIED. See attached LICENSE file for the text of CC BY-NC-SA.
Portions of the German data were sampled from the CoNLL 2006 Tiger Treebank
data. Hans Uszkoreit graciously gave permission to use the underlying
sentences in this data as part of this release.
Any use of the data should reference the above plus:
Universal Dependency Annotation for Multilingual Parsing
Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg,
Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang,
Oscar Tackstrom, Claudia Bedini, Nuria Bertomeu Castello and Jungmee Lee
Proceedings of ACL 2013
=======
Contact
=======
[email protected]
[email protected]
[email protected]
See https://github.com/ryanmcd/uni-dep-tb for more details
```
=== Machine-readable metadata =================================================
Data available since: UD v1.4
License: CC BY-SA 4.0
Includes text: yes
Genre: news blog
Lemmas: converted from manual
UPOS: converted from manual
XPOS: manual native
Features: not available
Relations: converted from manual
Contributors: Omura, Mai; Miyao, Yusuke; Kanayama, Hiroshi; Matsuda, Hiroshi; Wakasa, Aya; Yamashita, Kayo; Asahara, Masayuki; Tanaka, Takaaki; Murawaki, Yugo; Matsumoto, Yuji; Mori, Shinsuke; Uematsu, Sumire; McDonald, Ryan; Nivre, Joakim; Zeman, Daniel
Contributing: here
Contact: [email protected]
===============================================================================
(Original treebank contributors: LaMontagne, Adam; Souček, Milan; Järvinen, Timo; Radici, Alessandra)
|
[
"Cross-Lingual Transfer",
"Multilinguality",
"Syntactic Parsing",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/Katsumata420/wikihow_japanese
|
2020-05-28T08:07:59Z
|
wikiHow dataset (Japanese version)
|
Katsumata420 / wikihow_japanese
Public
Branches
Tags
Go to file
Go to file
Code
data
script
READ…
crawl_…
get.sh
require…
scrape…
This dataset is based on a paper, which describes wikiHow
English summarization dataset.
This dataset is crawled from Japanese wikiHow for Japanese
summarization dataset.
Python3
pip install -r requirements.txt
For quick start, run the script bash get.sh .
The train/dev/test json files are made in data/output .
In detail, you run the following steps.
About
No description, website, or topics
provided.
Readme
Activity
36 stars
3 watching
2 forks
Report repository
Releases 2
wikiHow Japanese v1.1
Latest
on Jan 1, 2021
+ 1 release
Packages
No packages published
Languages
HTML 96.7%
Python 3.1%
Shell 0.2%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
wikiHow dataset (Japanese
version)
Requirements
How to get the dataset
README
1. bash crawl_article.sh
Crawling each article from the url addresses in data/urls .
2. bash scrape2jsonl.sh
Extract knowledge from the html files crawled in step 1.
the extracted info is described in below
#json_description.
3. python script/make_data.py
Make train/dev/test data from the json files extracted in step
2 based on data/divided_data.tsv .
The detail of the json files is below section.
Howtojson
key
value
meta_title
html meta title text
num_part
the total part number in the article
original_title
title text
part_name_exist
exist the part title or not
contents
list of part (number is the same as num_part)
- part_title
part title
- part_contents
list of the each step content in the part
-- article
the article text in the step
-- bold_line
the bold line in the step
train/dev/test.json
key
value
src
source text
tgt
target text; bold lines in the article
title
article title + current part number
English wikiHow summarization dataset:
https://github.com/mahnazkoupaee/WikiHow-Dataset
json description
Related repository
The articles are provided by wikiHow. Content on wikiHow can be
shared under a Creative Commons License (CC-BY-NC-SA).
License
|
# wikiHow dataset (Japanese version)
- This dataset is based on [a paper](https://arxiv.org/abs/1810.09305) that describes the English wikiHow summarization dataset.
- This dataset is crawled from [Japanese wikiHow](https://www.wikihow.jp/%E3%83%A1%E3%82%A4%E3%83%B3%E3%83%9A%E3%83%BC%E3%82%B8) to build a Japanese summarization dataset.
## Requirements
- Python3
- `pip install -r requirements.txt`
## How to get the dataset
- For quick start, run the script `bash get.sh`.
- The train/dev/test json files are made in `data/output`.
- In detail, run the following steps.
1. `bash crawl_article.sh`
   - Crawl each article from the URL addresses in `data/urls`.
2. `bash scrape2jsonl.sh`
   - Extract the content from the HTML files crawled in step 1.
   - The extracted info is described in the json description section below.
3. `python script/make_data.py`
   - Make train/dev/test data from the json files extracted in step 2, based on `data/divided_data.tsv`.
   - The details of the json files are described in the section below.
### json description
- Howtojson
| key | value |
| :---|:--- |
| meta_title | html meta title text |
| num_part | the total part number in the article |
| original_title | title text |
| part_name_exist | whether the part title exists or not |
| contents | list of part (number is the same as num_part)|
| - part_title | part title |
| - part_contents | list of each step's content in the part |
| -- article | the article text in the step |
| -- bold_line | the bold line in the step |
- train/dev/test.json
| key | value |
| :---|:--- |
| src | source text |
| tgt | target text; bold lines in the article |
| title | article title + current part number |
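As a rough sketch for inspecting the generated data (the exact serialization of the output files is an assumption here, not specified above), the `src`/`tgt`/`title` fields can be read like this:
```python
# Sketch: load data/output/train.json and print a few source/target pairs.
# Whether the file is a JSON array or JSON Lines is an assumption; handle both.
import json

with open("data/output/train.json", encoding="utf-8") as f:
    text = f.read().strip()

try:
    records = json.loads(text)                      # single JSON array
except json.JSONDecodeError:
    records = [json.loads(l) for l in text.splitlines() if l.strip()]  # JSON Lines

for rec in records[:3]:
    print(rec["title"])
    print("src:", rec["src"][:80], "...")
    print("tgt:", rec["tgt"][:80], "...")
```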
## Related repository
- English wikiHow summarization dataset: https://github.com/mahnazkoupaee/WikiHow-Dataset
## License
The articles are provided by wikiHow.
Content on wikiHow can be shared under a Creative Commons License (CC-BY-NC-SA).
|
[
"Information Extraction & Text Mining",
"Summarization",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/echamudi/japanese-toolkit
|
2020-07-09T02:58:17Z
|
Monorepo for Kanji, Furigana, Japanese DB, and others
|
echamudi / japanese-toolkit
Public
57 Branches
41 Tags
Go to file
Go to file
Code
.github/workflows
images
packages
.eslintrc.json
.gitignore
README.md
lerna.json
package-lock.json
package.json
structure.md
Monorepo for Kanji, Furigana, Japanese DB, and others.
Japanese Toolkit JS
Japanese Toolkit JS
no status
no status
Name
Description
NPM
Kanji
Get kanji readings, kanji composition trees, and groupings.
Open readme - Try now in Repl.it!
kanji
downloads
downloads 500/month
500/month
Furigana
Fit kana text into Japanese writing.
Open readme - Try now in Repl.it!
furigana
downloads
downloads 827/month
827/month
Kyarakuta
Categorize and manipulate characters.
Open readme - Try now in Repl.it!
kyarakuta
downloads
downloads 841/month
841/month
About
Monorepo for Kanji, Furigana,
Japanese DB, and others
Readme
Activity
45 stars
4 watching
3 forks
Report repository
Releases
41 tags
Packages
No packages published
Languages
JavaScript 70.4%
TypeScript 29.6%
Code
Issues
2
Pull requests
55
Actions
Projects
Security
Insights
Japanese Toolkit JS
Package List
README
Name
Description
NPM
Japanese-DB
Generate Japanese dictionary SQLite database from open source materials.
Open readme
japanese-db
downloads
downloads 67/month
67/month
Goi
⏳ Work in Progress
Japanese words data (writings, readings, meaning, etc)
goi
Bunpou
⏳ Work in Progress
Japanese word conjugator
bunpou
Copyright © 2020 Ezzat Chamudi
Japanese Toolkit code is licensed under MPL-2.0. Images, logos, docs, and articles in this project are released under
CC-BY-SA-4.0.
Some packages have their own acknowledgement list and additional license notices. Please refer to the readme file of
each package
License
|
# Japanese Toolkit JS
<p align="center">
<a href="https://github.com/echamudi/japanese-toolkit/">
<img src="https://raw.githubusercontent.com/echamudi/japanese-toolkit/master/images/japanese-toolkit.svg" alt="Japanese Toolkit Logo" width="160" height="128">
</a>
</p>
Monorepo for Kanji, Furigana, Japanese DB, and others.
<img src="https://github.com/echamudi/japanese-toolkit/workflows/Japanese%20Toolkit%20JS/badge.svg">
## Package List
| Name | Description | NPM |
| - | - | - |
| Kanji | Get kanji readings, kanji composition trees, and groupings.<br> [Open readme](https://github.com/echamudi/japanese-toolkit/tree/master/packages/kanji) - [Try now in Repl.it!](https://repl.it/@echamudi/demo-kanji) | [kanji](https://www.npmjs.com/package/kanji) <br> <a href="https://www.npmjs.com/package/kanji"><img alt="npm" src="https://img.shields.io/npm/dm/kanji"></a> |
| Furigana | Fit kana text into Japanese writing.<br> [Open readme](https://github.com/echamudi/japanese-toolkit/tree/master/packages/furigana) - [Try now in Repl.it!](https://repl.it/@echamudi/demo-furigana) | [furigana](https://www.npmjs.com/package/furigana) <br> <a href="https://www.npmjs.com/package/furigana"><img alt="npm" src="https://img.shields.io/npm/dm/furigana"></a> |
| Kyarakuta | Categorize and manipulate characters.<br> [Open readme](https://github.com/echamudi/japanese-toolkit/tree/master/packages/kyarakuta) - [Try now in Repl.it!](https://repl.it/@echamudi/demo-kyarakuta) | [kyarakuta](https://www.npmjs.com/package/kyarakuta) <br> <a href="https://www.npmjs.com/package/kyarakuta"><img alt="npm" src="https://img.shields.io/npm/dm/kyarakuta"></a> |
| Japanese-DB | Generate Japanese dictionary SQLite database from open source materials.<br> [Open readme](https://github.com/echamudi/japanese-toolkit/tree/master/packages/japanese-db) | [japanese-db](https://www.npmjs.com/package/japanese-db) <br> <a href="https://www.npmjs.com/package/japanese-db"><img alt="npm" src="https://img.shields.io/npm/dm/japanese-db"></a> |
| Goi | ⏳ *Work in Progress* <br> Japanese words data (writings, readings, meaning, etc) | [goi](https://www.npmjs.com/package/goi) |
| Bunpou | ⏳ *Work in Progress* <br> Japanese word conjugator | [bunpou](https://www.npmjs.com/package/bunpou) |
## License
Copyright © 2020 [Ezzat Chamudi](https://github.com/echamudi)
Japanese Toolkit code is licensed under [MPL-2.0](https://www.mozilla.org/en-US/MPL/2.0/). Images, logos, docs, and articles in this project are released under [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
Some packages have their own acknowledgement list and additional license notices. Please refer to the readme file of each package.
|
[
"Syntactic Text Processing",
"Text Normalization"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/ids-cv/wrime
|
2020-08-18T05:13:42Z
|
WRIME: 主観と客観の感情分析データセット
|
ids-cv / wrime
Public
Branches
Tags
Go to file
Go to file
Code
LICEN…
READ…
READ…
wrime-…
wrime-…
日本語の感情分析の研究のために、以下の特徴を持つデータセットを構築しました。
主観(テキストの筆者1人)と客観(クラウドワーカ3人)の両方の立場から感情ラベ
ルを付与しました。
Plutchikの基本8感情(喜び、悲しみ、期待、驚き、怒り、恐れ、嫌悪、信頼)を扱い
ました。
各感情の強度を4段階(0:無、1:弱、2:中、3:強)でラベル付けしました。
Ver.2では、感情極性(-2:強いネガティブ、-1:ネガティブ、0:ニュートラル、1:ポジテ
ィブ、2:強いポジティブ)も追加しました。
@shunk031 さんが本データセットを HuggingFace Datasets Hub に登録してください
ました。
Ver.2: 60人の筆者から収集した35,000件の投稿(Ver.1のサブセット)に感情極性を追
加でラベル付けしました。
Ver.1: 80人の筆者から収集した43,200件の投稿に感情強度をラベル付けしました。
About
No description, website, or topics
provided.
Readme
View license
Activity
Custom properties
145 stars
3 watching
9 forks
Report repository
Releases
No releases published
Packages
No packages published
Contributors
2
moguranosenshi 梶原智之
nakatuba Tsubasa Nakagawa
Code
Issues
2
Pull requests
1
Actions
Projects
Security
Insights
WRIME: 主観と客観の感情分析データセット
[English]
更新履歴
README
License
投稿:車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。
喜び
悲しみ
期待
驚き
怒り
恐れ
嫌悪
信頼
感情極性
主観
0
3
0
1
3
0
0
0
0
客観A
0
3
0
3
1
2
1
0
-1
客観B
0
2
0
2
0
0
0
0
-1
客観C
0
2
0
2
0
1
1
0
-2
近藤里咲, 寺面杏優, 梶川怜恩, 堀口航輝, 梶原智之, 二宮崇, 早志英朗, 中島悠太, 長原
一. テキスト正規化による日本語感情分析の性能改善. 人工知能学会第38回全国大会,
2024.
鈴木陽也, 山内洋輝, 梶原智之, 二宮崇, 早志英朗, 中島悠太, 長原一. 書き手の複数投稿
を用いた感情分析. 人工知能学会第38回全国大会, 2024.
近藤里咲, 大塚琢生, 梶原智之, 二宮崇, 早志英朗, 中島悠太, 長原一. 大規模言語モデル
による日本語感情分析の性能評価. 情報処理学会第86回全国大会, pp.859-860, 2024.
Haruya Suzuki, Sora Tarumoto, Tomoyuki Kajiwara, Takashi Ninomiya, Yuta
Nakashima, Hajime Nagahara. Emotional Intensity Estimation based on Writer’s
Personality. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the
Association for Computational Linguistics and the 12th International Joint Conference
on Natural Language Processing: Student Research Workshop (AACL-SRW 2022),
pp.1-7, 2022.
鈴木陽也, 秋山和輝, 梶原智之, 二宮崇, 武村紀子, 中島悠太, 長原一. 書き手の性格情報
を用いた感情強度推定. 人工知能学会第36回全国大会, 2022.
Haruya Suzuki, Yuto Miyauchi, Kazuki Akiyama, Tomoyuki Kajiwara, Takashi Ninomiya,
Noriko Takemura, Yuta Nakashima, Hajime Nagahara. A Japanese Dataset for
Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain.
In Proceedings of the 13th International Conference on Language Resources and
Evaluation (LREC 2022), pp.7022-7028, 2022.
宮内裕人, 鈴木陽也, 秋山和輝, 梶原智之, 二宮崇, 武村紀子, 中島悠太, 長原一. 主観と客
観の感情極性分類のための日本語データセット. 言語処理学会第28回年次大会,
pp.1495-1499, 2022.
Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime
Nagahara. WRIME: A New Dataset for Emotional Intensity Estimation with
Subjective and Objective Annotations. In Proceedings of the 2021 Annual
Conference of the North American Chapter of the Association for Computational
Linguistics (NAACL 2021), pp.2095-2104, 2021.
梶原智之, Chenhui Chu, 武村紀子, 中島悠太, 長原一. 主観感情と客観感情の強度推定の
ための日本語データセット. 言語処理学会第27回年次大会, pp.523-527, 2021.
テキストとラベルの例
文献情報
本データセットを研究で利用された場合、論文情報をご連絡いただきましたらここに掲載
させていただきます。
本研究は、文部科学省によるSociety 5.0 実現化研究拠点支援事業(グラント番号:
JPMXP0518071489)の助成を受けたものです。
CC BY-NC-ND 4.0
梶原 智之(愛媛大学 大学院理工学研究科 講師)
中島 悠太(大阪大学 D3センター 教授)
sentiment-dataset at is.ids.osaka-u.ac.jp
謝辞
ライセンス
連絡先
|
# WRIME: Dataset for Emotional Intensity Estimation
We annotated SNS posts with emotional intensity to construct a Japanese emotion analysis dataset.
- We provide both subjective (i.e. based on what the writer feels) and objective (i.e. based on what humans and other machines think that the writer feels) annotations.
- Annotations follow Plutchik’s 8-category emotion schema on a 4-point intensity scale (0:no, 1:weak, 2:medium, and 3:strong).
- In Ver.2, we also annotate sentiment polarity (-2: Strong Negative, -1: Negative, 0: Neutral, 1: Positive, 2: Strong Positive).
## Change Log
- Thanks to [@shunk031](https://github.com/shunk031), WRIME is now available on the [HuggingFace Datasets Hub](https://huggingface.co/datasets/shunk031/wrime).
- Ver.2: We annotate 35,000 Japanese posts from 60 crowdsourced workers (a subset of Ver.1) with both emotional intensity and sentiment polarity.
- Ver.1: We annotate 43,200 Japanese posts from 80 crowdsourced workers with emotional intensity.
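A quick way to browse the data is through the Datasets Hub entry mentioned above. The snippet below is a sketch; the configuration name (`ver2`) and the need for `trust_remote_code` depend on the Hub entry and your `datasets` version, so treat them as assumptions.
```python
# Sketch: load WRIME via the HuggingFace Datasets Hub entry mentioned above.
from datasets import load_dataset

ds = load_dataset("shunk031/wrime", name="ver2", trust_remote_code=True)
print(ds)                     # splits and sizes
print(ds["train"][0].keys())  # inspect the available fields
```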
## Examples
Text: 車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。<br>
(The tire of my car was flat. I heard that it might be mischief.)
||Joy|Sadness|Anticipation|Surprise|Anger|Fear|Disgust|Trust|Sentiment Polarity|
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Writer |0|3|0|1|3|0|0|0|0|
|Reader 1|0|3|0|3|1|2|1|0|-1|
|Reader 2|0|2|0|2|0|0|0|0|-1|
|Reader 3|0|2|0|2|0|1|1|0|-2|
## Research with WRIME
A list of known publications that use WRIME is shown below.
If you know more, please let us know.
- Haruya Suzuki, Sora Tarumoto, Tomoyuki Kajiwara, Takashi Ninomiya, Yuta Nakashima, Hajime Nagahara. **[Emotional Intensity Estimation based on Writer’s Personality.](https://aclanthology.org/2022.aacl-srw.1/)** In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop (AACL-SRW 2022), pp.1-7, 2022.
- Haruya Suzuki, Yuto Miyauchi, Kazuki Akiyama, Tomoyuki Kajiwara, Takashi Ninomiya, Noriko Takemura, Yuta Nakashima, Hajime Nagahara. **[A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain.](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.759.pdf)** In Proceedings of the 13th International Conference on Language Resources and Evaluation (LREC 2022), pp.7022-7028, 2022.
- Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime Nagahara. **[WRIME: A New Dataset for Emotional Intensity Estimation with Subjective and Objective Annotations.](https://aclanthology.org/2021.naacl-main.169/)** In Proceedings of the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021), pp.2095-2104, 2021.
## Acknowledgments
This work was supported by [Innovation Platform for Society 5.0](https://www.ids.osaka-u.ac.jp/ildi/en/index.html) from Japan Ministry of Education, Culture, Sports, Science and Technology.
## Licence
[CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/)
## Authors
- [Tomoyuki Kajiwara](http://moguranosenshi.sakura.ne.jp/cv.pdf) (Senior Assistant Professor, Graduate School of Science and Engineering, Ehime University, Japan)
- [Yuta Nakashima](https://www.n-yuta.jp/) (Professor, D3 Center, Osaka University, Japan)
sentiment-dataset *at* is.ids.osaka-u.ac.jp
|
[
"Emotion Analysis",
"Sentiment Analysis"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/tokuhirom/jawiki-kana-kanji-dict
|
2020-08-23T14:21:04Z
|
Generate SKK/MeCab dictionary from Wikipedia(Japanese edition)
|
tokuhirom / jawiki-kana-kanji-dict
Public
2 Branches
0 Tags
Go to file
Go to file
Code
renovate[bot] Update dependency python-Levenshtein to v0.26.1
69f3368 · 2 weeks ago
.github/workflows
bin
dat
jawiki
logs
tests
.gitattributes
.gitignore
Makefile
README.md
SKK-JISYO.jawiki
check.py
lindera-userdic.csv
mecab-userdic.csv
pytest.ini
renovate.json
requirements.txt
setup.py
user_simpledic.csv
wikipedia 日本語版のデータを元に、SKK/MeCab の辞書をつくるスクリプトです。
github actions で wikipedia から定期的にデータを取得して https://github.com/tokuhirom/skk-jisyo-jawiki/blob/master/SKK-JISYO.jawiki を、定
期的に更新するようにしています。 (github actions を利用することで、メンテナが何もしなくても自動的に更新されることを期待していま
す。)
python 3.4+
jaconv
pytests
About
Generate SKK/MeCab dictionary
from Wikipedia(Japanese edition)
# nlp # wikipedia # japanese-language
Readme
Activity
51 stars
6 watching
2 forks
Report repository
Contributors
3
tokuhirom Tokuhiro Matsuno
renovate[bot]
github-actions[bot]
Languages
Python 98.7%
Makefile 1.3%
Code
Issues
2
Pull requests
Actions
Projects
Security
Insights
jawiki-kana-kanji-dict
これは何?
Requirements
README
Levenshtein
janome
bunzip2
gnu make
gnu grep
check.py に条件を追加します(これにより、デグレしづらくなります)
手元で make check を実行して、実行失敗することを確認します。
grep 対象ワード dat/* として、対象のワードの状態をみます。
user_simpledic.csv か jawiki/*.py のルールを変更します
.github/workflows/python-app.yml が github actions の定義ファイルです。これにより、定期的に辞書ファイルが再生成されます。
処理のフローは以下の通りです。試行錯誤/途中のステップのバグ発見しやすいように、複数ステップに分割されています。
make を実行すれば、一通りファイルが実行されます。
make check を実行すると、辞書ファイルを生成し、辞書ファイルの正当性を確認します。
make test を実行すると、テストスイートを実行します。
Python scripts are:
How to contribute
どう動いているのか
LICENSE
The MIT License (MIT)
Copyright © 2020 Tokuhiro Matsuno, http://64p.org/ <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
Wikipedia license is:
https://ja.wikipedia.org/wiki/Wikipedia:%E3%82%A6%E3%82%A3%E3%82%AD%E3%83%9A%E3%83%87%E3%82%A3%E3%82%A2%E3
%82%92%E4%BA%8C%E6%AC%A1%E5%88%A9%E7%94%A8%E3%81%99%E3%82%8B
自動生成されたファイルは Wikipedia のコンテンツに対する軽微利用であると私は考えます。 よって、生成ファイルは Python script と同様
の MIT License で配布します。
Wikipedia 日本語版がなければ、(もちろん)このプロジェクトはありえませんでしたし、今後も継続的に利用可能な新語辞書を作れるのは、
wikipedia あってこそです。 Wikipedia 日本語版の貢献者各位に感謝いたします。
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
謝辞
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
|
true |
https://github.com/poyo46/ginza-examples
|
2020-08-24T16:42:23Z
|
日本語NLPライブラリGiNZAのすゝめ
|
poyo46 / ginza-examples
Public
Branches
Tags
Go to file
Go to file
Code
.github/workflows
dev
examples
tests
.gitignore
.replit
LICENSE
README.md
poetry.lock
pyproject.toml
本記事は、日本語の自然言語処理ライブラリである GiNZA の紹介記事です。 Qiitaの記事 と GiNZA examples - GitHub の二箇所に同じ
ものを公開しています。
記事を書いた経緯
想定する読者
自然言語処理ってどんなことができるの?という初学者の方
筆者もまだまだ初学者ですが GiNZA は簡単にすごいことができるのでぜひ見ていってください。
Pythonを学びたての方やこれから学ぼうとしている方
Python学習のモチベーションになると思います。
MeCab などの形態素解析器を使ったことはあるが GiNZA は初めて聞いたという方
簡単に比較できるものではありませんが新たに GiNZA を知る価値はあると思います。
GiNZA は 日本語の 自然言語処理ライブラリです。 もともと spaCy という自然言語処理のフレームワークがあり、英語など主要な言語
に対応していました。 GiNZA は言わば spaCy の日本語対応版です。 詳細については GiNZAの公開ページ をご覧ください。
About
日本語NLPライブラリGiNZAのすゝ
め
# python # nlp # japanese # spacy # mecab
# ginza
Readme
MIT license
Activity
15 stars
1 watching
3 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
日本語NLPライブラリGiNZAのすゝめ
この記事について
GiNZAとは
README
MIT license
GiNZAを選ぶ理由
ここで紹介するコードは GitHubホストランナーの仮想環境 のubuntu-latest, macos-latestとPython 3.6, 3.7, 3.8の組み合わせ(計6通り)
で動作検証しています。
動作検証結果
TestGinzaExamples (GiNZA Examples 本体)
TestOther (リンク切れのチェックなど)
オンライン
Pythonに親しみのない方や手っ取り早く動作環境がほしい方向けにオンラインの実行環境を用意しています。ブラウザで GiNZA
examples - Repl.it を開いて Run をクリックしてください。セットアップ完了までに5分程度かかります。
ローカル環境
もし poetry が未導入であれば $ pip install poetry でインストールしてください。
ソースコードを開く
ソースコードをGitHubで見る
実行
結果
| i | text | lemma_ | reading_form | pos_ | tag_ | inflection | ent_type_ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 田中 | 田中 | タナカ | PROPN | 名詞-固有名詞-人名-姓 | | Person |
| 1 | 部長 | 部長 | ブチョウ | NOUN | 名詞-普通名詞-一般 | | Position_Vocation |
| 2 | に | に | ニ | ADP | 助詞-格助詞 | | |
| 3 | 伝え | 伝える | ツタエ | VERB | 動詞-一般 | 下一段-ア行,連用形-一般 | |
| 4 | て | て | テ | SCONJ | 助詞-接続助詞 | | |
| 5 | ください | くださる | クダサイ | AUX | 動詞-非自立可能 | 五段-ラ行,命令形 | |
| 6 | 。 | 。 | 。 | PUNCT | 補助記号-句点 | | |
※結果は見やすいように加工しています。
説明を開く
応用 この解析結果を使って例えば次のようなことができます。
文中に含まれる単語から動詞と形容詞の原形を抽出する。
文中に含まれる食べ物を抽出してカウントする。
文中の個人情報をマスキングする。
ソースコードを開く
GiNZAを動かす
セットアップ
git clone https://github.com/poyo46/ginza-examples.git
cd ginza-examples
poetry install
形態素解析
python examples/token_information.py 田中部長に伝えてください。
テキストを文のリストに分割する
ソースコードをGitHubで見る
実行
結果
説明を開く
ソースコードを開く
ソースコードをGitHubで見る
実行
結果
(図: displacy による「あのラーメン屋にはよく行く。」の依存構造の可視化。各トークンの品詞タグと、det・compound・obl・case・advmod などの係り受けラベルが矢印付きで表示されます。)
ブラウザで http://localhost:5000 を開くと解析結果が表示されます。同時に、サンプルコードでは画像を examples/displacy.svg に保存し
ています。
LexRankアルゴリズムを用いて抽出型要約を実行します。抽出型要約とは、元の文から重要な文を(無加工で)抽出するものです。サン
プル文として 『走れメロス』 を用意しました。
ソースコードを開く
ソースコードをGitHubで見る
実行
結果
python examples/split_text.py はい、そうです。ありがとうございますよろしくお願いします。
はい、そうです。
ありがとうございます
よろしくお願いします。
依存構造解析・可視化
python examples/displacy.py あのラーメン屋にはよく行く。
Using the 'dep' visualizer
Serving on http://0.0.0.0:5000 ...
文章要約
python examples/lexrank_summary.py examples/data/run_melos.txt 15
人を、信ずる事が出来ぬ、というのです。
三日のうちに、私は村で結婚式を挙げさせ、必ず、ここへ帰って来ます。
LexRankアルゴリズムによって抽出された、重要度の高い上位 15 文です。重要度のスコアは一度だけ計算すればよいため、抽出する文
の数を変更したい場合は lexrank_scoring の結果を再利用すると速いです。
ソースコードを開く
ソースコードをGitHubで見る
実行
結果
| | 今日はとても良い天気です。 | 昨日の天気は大雨だったのに。 | ラーメンを食べました。 |
| --- | --- | --- | --- |
| 今日はとても良い天気です。 | 1.0 | 0.9085916084662856 | 0.7043564497093551 |
| 昨日の天気は大雨だったのに。 | 0.9085916084662856 | 1.0 | 0.7341796340817486 |
| ラーメンを食べました。 | 0.7043564497093551 | 0.7341796340817486 | 1.0 |
※結果は見やすいように加工しています。
説明を開く
GiNZA そのものは MIT License で利用できます。詳しくは ライセンス条項 をご覧ください。
筆者の Qiitaの記事 および GiNZA examples - GitHub も同様に MIT License で利用できます。
GiNZA を利用できる言語はPythonのみです。しかしフレームワークである spaCy にはJavaScript版やR版など Python以外の言語で
の実装 があるため、すごい人たちが移植してくれることを期待します。
単語のネガティブ・ポジティブを数値化する Token.sentiment は現時点で実装されていませんが、 GiNZA 開発者の方から直々にコ
メントをいただき、今後実装を計画していただけるとのことです。
ご意見・ご要望などは随時受け付けています。 Qiitaの記事 へコメント、または GitHubのIssues へ投稿をお願いします。
そうして身代りの男を、三日目に殺してやるのも気味がいい。
ものも言いたくなくなった。
そうして、少し事情があるから、結婚式を明日にしてくれ、と頼んだ。
あれが沈んでしまわぬうちに、王城に行き着くことが出来なかったら、あの佳い友達が、私のために死ぬのです。
何をするのだ。
けれども、今になってみると、私は王の言うままになっている。
王は、ひとり合点して私を笑い、そうして事も無く私を放免するだろう。
私を、待っている人があるのだ。
死んでお詫び、などと気のいい事は言って居られぬ。
メロス。
その人を殺してはならぬ。
メロスが帰って来た。
メロスだ。
文の類似度
python examples/similarity.py 今日はとても良い天気です。 昨日の天気は大雨だったのに。 ラーメンを食べました。
ライセンス
GiNZA
GiNZA examples
注意事項
ご意見・ご要望など
|
# 日本語NLPライブラリGiNZAのすゝめ
## この記事について
本記事は、日本語の自然言語処理ライブラリである [GiNZA](https://github.com/megagonlabs/ginza) の紹介記事です。
[Qiitaの記事](https://qiita.com/poyo46/items/7a4965455a8a2b2d2971) と [GiNZA examples - GitHub](https://github.com/poyo46/ginza-examples) の二箇所に同じものを公開しています。
<details>
<summary>記事を書いた経緯</summary>
<div>
筆者は [GiNZA](https://github.com/megagonlabs/ginza) の開発者の方々と何の利害関係もありません。
自然言語処理系の最新技術を検索していてたまたま見つけ、その簡単さに感動したので勝手に宣伝しています。
> 全ての開発は感動から始まる。
コンピュータ産業の父であり筆者の尊敬するエンジニアである池田敏雄さんはこのように言いました。この記事の目的は [GiNZA](https://github.com/megagonlabs/ginza) の感動を共有することです。
自然言語処理という難解な分野でありますが、なるべく事前知識なしで [GiNZA](https://github.com/megagonlabs/ginza) を楽しめるようにと願っています。
なお、最初にこの記事を書いたのは2019年の8月です。 [GiNZA](https://github.com/megagonlabs/ginza) の更新に追いつけなくなっていたので改めて書き直しました。
</div>
</details>
**想定する読者**
* 自然言語処理ってどんなことができるの?という初学者の方
* 筆者もまだまだ初学者ですが [GiNZA](https://github.com/megagonlabs/ginza) は簡単にすごいことができるのでぜひ見ていってください。
* Pythonを学びたての方やこれから学ぼうとしている方
* Python学習のモチベーションになると思います。
* [MeCab](https://taku910.github.io/mecab/) などの形態素解析器を使ったことはあるが [GiNZA](https://github.com/megagonlabs/ginza) は初めて聞いたという方
* 簡単に比較できるものではありませんが新たに [GiNZA](https://github.com/megagonlabs/ginza) を知る価値はあると思います。
## GiNZAとは

[GiNZA](https://github.com/megagonlabs/ginza) は **日本語の** 自然言語処理ライブラリです。
もともと [spaCy](https://spacy.io/) という自然言語処理のフレームワークがあり、英語など主要な言語に対応していました。 [GiNZA](https://github.com/megagonlabs/ginza) は言わば [spaCy](https://spacy.io/) の日本語対応版です。
詳細については [GiNZAの公開ページ](https://megagonlabs.github.io/ginza/) をご覧ください。
<details>
<summary>GiNZAを選ぶ理由</summary>
<div>
日本語の形態素解析器として有名なものに [MeCab](https://taku910.github.io/mecab/) があります(形態素解析って何?という方は [Web茶まめ ©国立国語研究所](https://chamame.ninjal.ac.jp/) にて実行してみてください)。 [GiNZA](https://github.com/megagonlabs/ginza) も同様に日本語の文を分かち書きすることができます。単に日本語を分かち書きしたいだけなら [MeCab](https://taku910.github.io/mecab/) の方が圧倒的に速いです。
それでも [GiNZA](https://github.com/megagonlabs/ginza) には次のメリットがあると思います。
* 簡単に導入できる
* [MeCab](https://taku910.github.io/mecab/) はOSに応じた環境構築を行わねばなりません。
* [GiNZA](https://github.com/megagonlabs/ginza) の導入はOSに関係なく `$ pip install ginza` でできます。
* [spaCy](https://spacy.io/) フレームワークを採用している
* 英語などの言語で [spaCy](https://spacy.io/) を利用した機械学習の実践例が見つかります。
* [GiNZA](https://github.com/megagonlabs/ginza) の登場で同じことが日本語でもできるようになりました。
* 例えばチャットボットAIのフレームワークで有名な [Rasa](https://rasa.com/) は [GiNZA](https://github.com/megagonlabs/ginza) のおかげで日本語でも使えるようになりました。
* [最新の研究](https://github.com/megagonlabs/ginza#training-data-sets) を反映している
なお、決して [MeCab](https://taku910.github.io/mecab/) をディスっているわけではないことを強調しておきます。状況や目的によって最適な選択は変わります。
[MeCab](https://taku910.github.io/mecab/) はすでに長期間使用されており、高速というだけでなく高い信頼性、そして豊富な実践例があります。
また、最新の語彙に随時対応し続ける [NEologd](https://github.com/neologd/mecab-ipadic-neologd) や、国立国語研究所が開発した [UniDic](https://unidic.ninjal.ac.jp/about_unidic) を利用することができるのも [MeCab](https://taku910.github.io/mecab/) のメリットだと思います。
</div>
</details>
## GiNZAを動かす
ここで紹介するコードは GitHubホストランナーの仮想環境 のubuntu-latest, macos-latestとPython 3.6, 3.7, 3.8の組み合わせ(計6通り)で動作検証しています。
**動作検証結果**
[](https://github.com/poyo46/ginza-examples/actions?query=workflow%3ATestGinzaExamples) (GiNZA Examples 本体)
[](https://github.com/poyo46/ginza-examples/actions?query=workflow%3ATestOther) (リンク切れのチェックなど)
### セットアップ
**オンライン**
Pythonに親しみのない方や手っ取り早く動作環境がほしい方向けにオンラインの実行環境を用意しています。ブラウザで [GiNZA examples - Repl.it](https://repl.it/github/poyo46/ginza-examples) を開いて `Run` をクリックしてください。セットアップ完了までに5分程度かかります。
**ローカル環境**
```
git clone https://github.com/poyo46/ginza-examples.git
cd ginza-examples
poetry install
```
もし `poetry` が未導入であれば `$ pip install poetry` でインストールしてください。
### 形態素解析
<details><summary>ソースコードを開く</summary><div>
```python:examples/token_information.py
import sys
from typing import List
from pprint import pprint
import spacy
import ginza
nlp = spacy.load('ja_ginza')
def tokenize(text: str) -> List[List[str]]:
"""
日本語文を形態素解析する。
Parameters
----------
text : str
解析対象の日本語テキスト。
Returns
-------
List[List[str]]
形態素解析結果。
Notes
-----
* Token 属性の詳細については次のリンク先をご覧ください。
https://spacy.io/api/token#attributes
* Token.lemma_ の値は SudachiPy の Morpheme.dictionary_form() です。
* Token.ent_type_ の詳細については次のリンク先をご覧ください。
http://liat-aip.sakura.ne.jp/ene/ene8/definition_jp/html/enedetail.html
"""
doc = nlp(text)
attrs_list = []
for token in doc:
token_attrs = [
token.i, # トークン番号
token.text, # テキスト
token.lemma_, # 基本形
ginza.reading_form(token), # 読みカナ
token.pos_, # 品詞
token.tag_, # 品詞詳細
ginza.inflection(token), # 活用情報
token.ent_type_ # 固有表現
]
attrs_list.append([str(a) for a in token_attrs])
return attrs_list
EXAMPLE_TEXT = '田中部長に伝えてください。'
EXAMPLE_SCRIPT = f'python examples/token_information.py {EXAMPLE_TEXT}'
ATTRS = [
'i', 'text', 'lemma_', 'reading_form', 'pos_', 'tag_',
'inflection', 'ent_type_'
]
if __name__ == '__main__':
if len(sys.argv) > 1:
input_text = sys.argv[1]
pprint(tokenize(input_text))
else:
print('Please run as follows: \n$ ' + EXAMPLE_SCRIPT)
```
</div></details>
[ソースコードをGitHubで見る](https://github.com/poyo46/ginza-examples/blob/master/examples/token_information.py)
**実行**
```
python examples/token_information.py 田中部長に伝えてください。
```
**結果**
| i | text | lemma_ | reading_form | pos_ | tag_ | inflection | ent_type_ |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| 0 | 田中 | 田中 | タナカ | PROPN | 名詞-固有名詞-人名-姓 | | Person |
| 1 | 部長 | 部長 | ブチョウ | NOUN | 名詞-普通名詞-一般 | | Position_Vocation |
| 2 | に | に | ニ | ADP | 助詞-格助詞 | | |
| 3 | 伝え | 伝える | ツタエ | VERB | 動詞-一般 | 下一段-ア行,連用形-一般 | |
| 4 | て | て | テ | SCONJ | 助詞-接続助詞 | | |
| 5 | ください | くださる | クダサイ | AUX | 動詞-非自立可能 | 五段-ラ行,命令形 | |
| 6 | 。 | 。 | 。 | PUNCT | 補助記号-句点 | | |
※結果は見やすいように加工しています。
<details>
<summary>説明を開く</summary>
<div>
日本語の文を単語ごとに分け、各単語の解析結果を表示しています。
`Token.pos_` は [Universal Part-of-speech Tags](https://spacy.io/api/annotation#pos-universal) と呼ばれるもので、言語に依存せず全世界的に単語の品詞を表そうというものです(Part-of-speech = 品詞)。
`Token.ent_type_` は固有表現と呼ばれるもので、例えば人名には `Person` が、料理名には `Dish` が割り当てられます。詳細な定義については [こちら](http://liat-aip.sakura.ne.jp/ene/ene8/definition_jp/html/enedetail.html) をご覧ください。
`Token` の他の属性については [spaCy API doc](https://spacy.io/api/token#attributes) をご覧ください。
</div>
</details>
**応用**
この解析結果を使って例えば次のようなことができます。
* 文中に含まれる単語から動詞と形容詞の原形を抽出する。
* 文中に含まれる食べ物を抽出してカウントする。
* 文中の個人情報をマスキングする。
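このうち「動詞と形容詞の原形の抽出」と「個人情報(人名)のマスキング」は、例えば次のように書けます(本記事のサンプルコードには含まれない最小限のスケッチで、関数名は説明用に仮で付けたものです)。
```python
import spacy

nlp = spacy.load('ja_ginza')

def extract_lemmas(text: str):
    # 動詞・形容詞の原形(Token.lemma_)だけを取り出す
    doc = nlp(text)
    return [token.lemma_ for token in doc if token.pos_ in ('VERB', 'ADJ')]

def mask_person_names(text: str) -> str:
    # 固有表現(Token.ent_type_)が Person の単語を伏せ字に置き換える
    doc = nlp(text)
    return ''.join(
        '●' * len(token.text) + token.whitespace_ if token.ent_type_ == 'Person'
        else token.text_with_ws
        for token in doc
    )

print(extract_lemmas('田中部長に伝えてください。'))    # => ['伝える']
print(mask_person_names('田中部長に伝えてください。'))  # => ●●部長に伝えてください。
```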
### テキストを文のリストに分割する
<details><summary>ソースコードを開く</summary><div>
```python:examples/split_text.py
import sys
from typing import List
import spacy
nlp = spacy.load('ja_ginza')
def get_sentences(text: str) -> List[str]:
"""
文のリストに分割したテキストを取得する。
Parameters
----------
text : str
分割対象の日本語テキスト。
Returns
-------
List[str]
文のリスト。結合すると `text` に一致する。
See Also
--------
https://spacy.io/api/doc#sents
"""
doc = nlp(text)
sentences = [s.text for s in doc.sents]
return sentences
EXAMPLE_TEXT = 'はい、そうです。ありがとうございますよろしくお願いします。'
EXAMPLE_SCRIPT = f'python examples/split_text.py {EXAMPLE_TEXT}'
if __name__ == '__main__':
if len(sys.argv) > 1:
input_text = sys.argv[1]
print('\n'.join(get_sentences(input_text)))
else:
print('Please run as follows: \n$ ' + EXAMPLE_SCRIPT)
```
</div></details>
[ソースコードをGitHubで見る](https://github.com/poyo46/ginza-examples/blob/master/examples/split_text.py)
**実行**
```
python examples/split_text.py はい、そうです。ありがとうございますよろしくお願いします。
```
**結果**
```
はい、そうです。
ありがとうございます
よろしくお願いします。
```
<details>
<summary>説明を開く</summary>
<div>
[spaCy](https://spacy.io/) の [Doc.sents](https://spacy.io/api/doc#sents) を利用してテキストを文のリストに変換しています。
</div>
</details>
### 依存構造解析・可視化
<details><summary>ソースコードを開く</summary><div>
```python:examples/displacy.py
import sys
from pathlib import Path
import spacy
from spacy import displacy
nlp = spacy.load('ja_ginza')
def visualize(text: str) -> None:
"""
日本語文の文法的構造を解析し、可視化する。
Parameters
----------
text : str
解析対象の日本語テキスト。
Notes
-----
実行後、 ブラウザで http://localhost:5000 を開くと画像が表示される。
"""
doc = nlp(text)
displacy.serve(doc, style='dep')
def save_as_image(text: str, path) -> None:
"""
日本語文の文法的構造を解析し、結果を画像として保存する。
Parameters
----------
text : str
解析対象の日本語テキスト。
path
保存先のファイルパス。
Notes
-----
画像はSVG形式で保存される。
"""
doc = nlp(text)
svg = displacy.render(doc, style='dep')
with open(path, mode='w') as f:
f.write(svg)
EXAMPLE_TEXT = 'あのラーメン屋にはよく行く。'
EXAMPLE_SCRIPT = f'python examples/displacy.py {EXAMPLE_TEXT}'
if __name__ == '__main__':
if len(sys.argv) > 1:
input_text = sys.argv[1]
save_to = Path(__file__).with_name('displacy.svg')
save_as_image(input_text, save_to)
visualize(input_text)
else:
print('Please run as follows: \n$ ' + EXAMPLE_SCRIPT)
```
</div></details>
[ソースコードをGitHubで見る](https://github.com/poyo46/ginza-examples/blob/master/examples/displacy.py)
**実行**
```
python examples/displacy.py あのラーメン屋にはよく行く。
```
**結果**
```
Using the 'dep' visualizer
Serving on http://0.0.0.0:5000 ...
```

ブラウザで http://localhost:5000 を開くと解析結果が表示されます。同時に、サンプルコードでは画像を [examples/displacy.svg](https://raw.githubusercontent.com/poyo46/ginza-examples/master/examples/displacy.svg) に保存しています。
### 文章要約
LexRankアルゴリズムを用いて抽出型要約を実行します。抽出型要約とは、元の文から重要な文を(無加工で)抽出するものです。サンプル文として [『走れメロス』](https://github.com/poyo46/ginza-examples/blob/master/examples/data/run_melos.txt) を用意しました。
<details><summary>ソースコードを開く</summary><div>
```python:examples/lexrank_summary.py
import sys
import re
from typing import Tuple, List
from pathlib import Path
import numpy as np
import spacy
from sumy.summarizers.lex_rank import LexRankSummarizer
nlp = spacy.load('ja_ginza')
def preprocess(text: str) -> str:
"""
要約の前処理を実施する。
* 全角スペースや括弧、改行を削除する。
* !?を。に置換する
Parameters
----------
text : str
日本語のテキスト。
Returns
-------
str
前処理が実施されたtext
"""
text = re.sub('[ 「」『』【】\r\n]', '', text)
text = re.sub('[!?]', '。', text)
text = text.strip()
return text
def lexrank_scoring(text: str) -> Tuple[List[str], np.ndarray]:
"""
LexRankアルゴリズムによって文に点数をつける。
この点数は文の重要度とみなすことができる。
Parameters
----------
text : str
分析対象のテキスト。
Returns
-------
List[str]
text を文のリストに分解したもの。
np.ndarray
文のリストに対応する重要度のリスト。
"""
doc = nlp(text)
# 文のリストと単語のリストをつくる
sentences = []
corpus = []
for sent in doc.sents:
sentences.append(sent.text)
tokens = []
for token in sent:
# 文に含まれる単語のうち, 名詞・副詞・形容詞・動詞に限定する
if token.pos_ in ('NOUN', 'ADV', 'ADJ', 'VERB'):
# ぶれをなくすため, 単語の見出し語 Token.lemma_ を使う
tokens.append(token.lemma_)
corpus.append(tokens)
# sentences = [文0, 文1, ...]
# corpus = [[文0の単語0, 文0の単語1, ...], [文1の単語0, 文1の単語1, ...], ...]
# sumyライブラリによるLexRankスコアリング
lxr = LexRankSummarizer()
tf_metrics = lxr._compute_tf(corpus)
idf_metrics = lxr._compute_idf(corpus)
matrix = lxr._create_matrix(corpus, lxr.threshold, tf_metrics, idf_metrics)
scores = lxr.power_method(matrix, lxr.epsilon)
# scores = [文0の重要度, 文1の重要度, ...]
return sentences, scores
def extract(sentences: List[str], scores: np.ndarray, n: int) -> List[str]:
"""
スコアの高い順にn個の文を抽出する。
Parameters
----------
sentences : List[str]
文のリスト。
scores : np.ndarray
スコアのリスト。
n : int
抽出する文の数。
Returns
-------
List[str]
sentencesから抽出されたn個の文のリスト。
"""
assert len(sentences) == len(scores)
# scoresのインデックスリスト
indices = range(len(scores))
# スコアの大きい順に並べ替えたリスト
sorted_indices = sorted(indices, key=lambda i: scores[i], reverse=True)
# スコアの大きい順からn個抽出したリスト
extracted_indices = sorted_indices[:n]
# インデックスの並び順をもとに戻す
extracted_indices.sort()
# 抽出されたインデックスに対応する文のリストを返す
return [sentences[i] for i in extracted_indices]
def get_summary(path, n) -> List[str]:
with open(path, mode='rt', encoding='utf-8') as f:
text = f.read()
text = preprocess(text)
sentences, scores = lexrank_scoring(text)
return extract(sentences, scores, n)
EXAMPLE_PATH = Path(__file__).with_name('data') / 'run_melos.txt'
N = 15
EXAMPLE_SCRIPT = f'python examples/lexrank_summary.py examples/data/run_melos.txt {N}'
if __name__ == '__main__':
if len(sys.argv) > 2:
input_path = sys.argv[1]
input_n = int(sys.argv[2])
extracted_sentences = get_summary(input_path, input_n)
print('\n'.join(extracted_sentences))
else:
print('Please run as follows: \n$ ' + EXAMPLE_SCRIPT)
```
</div></details>
[ソースコードをGitHubで見る](https://github.com/poyo46/ginza-examples/blob/master/examples/lexrank_summary.py)
**実行**
```
python examples/lexrank_summary.py examples/data/run_melos.txt 15
```
**結果**
```
人を、信ずる事が出来ぬ、というのです。
三日のうちに、私は村で結婚式を挙げさせ、必ず、ここへ帰って来ます。
そうして身代りの男を、三日目に殺してやるのも気味がいい。
ものも言いたくなくなった。
そうして、少し事情があるから、結婚式を明日にしてくれ、と頼んだ。
あれが沈んでしまわぬうちに、王城に行き着くことが出来なかったら、あの佳い友達が、私のために死ぬのです。
何をするのだ。
けれども、今になってみると、私は王の言うままになっている。
王は、ひとり合点して私を笑い、そうして事も無く私を放免するだろう。
私を、待っている人があるのだ。
死んでお詫び、などと気のいい事は言って居られぬ。
メロス。
その人を殺してはならぬ。
メロスが帰って来た。
メロスだ。
```
LexRankアルゴリズムによって抽出された、重要度の高い上位 15 文です。重要度のスコアは一度だけ計算すればよいため、抽出する文の数を変更したい場合は [lexrank_scoring](https://github.com/poyo46/ginza-examples/blob/master/examples/lexrank_summary.py#L34) の結果を再利用すると速いです。
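たとえば本文の `preprocess` ・ `lexrank_scoring` ・ `extract` を同一モジュール内で使う想定なら、次のようにスコアを一度だけ計算して使い回せます(最小限のスケッチです)。
```python
# lexrank_scoring の結果を再利用して、抽出する文の数だけを変えるスケッチ
# (preprocess, lexrank_scoring, extract は上記 examples/lexrank_summary.py の関数を想定)
with open('examples/data/run_melos.txt', mode='rt', encoding='utf-8') as f:
    text = preprocess(f.read())

sentences, scores = lexrank_scoring(text)  # スコアリングは一度だけ実行する

summary_5 = extract(sentences, scores, 5)    # 5文に要約
summary_15 = extract(sentences, scores, 15)  # 15文に要約
print('\n'.join(summary_15))
```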
### 文の類似度
<details><summary>ソースコードを開く</summary><div>
```python:examples/similarity.py
import sys
from typing import List
from pprint import pprint
import spacy
import numpy as np
nlp = spacy.load('ja_ginza')
def similarity_matrix(texts: List[str]) -> List[List[np.float64]]:
"""
テキスト同士の類似度を計算する。
Parameters
----------
texts : str
日本語文のリスト。
Returns
-------
List[List[np.float64]]
文同士の類似度。
Notes
-----
spaCy の Doc.similarity (https://spacy.io/api/doc#similarity) を使っている。
"""
docs = [nlp(text) for text in texts]
rows = [[a.similarity(b) for a in docs] for b in docs]
return rows
EXAMPLE_TEXTS = [
'今日はとても良い天気です。',
'昨日の天気は大雨だったのに。',
'ラーメンを食べました。'
]
EXAMPLE_SCRIPT = f'python examples/similarity.py ' + ' '.join(EXAMPLE_TEXTS)
if __name__ == '__main__':
if len(sys.argv) > 2:
input_texts = sys.argv[1:]
pprint(similarity_matrix(input_texts))
else:
print('Please run as follows: \n$ ' + EXAMPLE_SCRIPT)
```
</div></details>
[ソースコードをGitHubで見る](https://github.com/poyo46/ginza-examples/blob/master/examples/similarity.py)
**実行**
```
python examples/similarity.py 今日はとても良い天気です。 昨日の天気は大雨だったのに。 ラーメンを食べました。
```
**結果**
| | 今日はとても良い天気です。 | 昨日の天気は大雨だったのに。 | ラーメンを食べました。 |
| :-- | :-- | :-- | :-- |
| 今日はとても良い天気です。 | 1.0 | 0.9085916084662856 | 0.7043564497093551 |
| 昨日の天気は大雨だったのに。 | 0.9085916084662856 | 1.0 | 0.7341796340817486 |
| ラーメンを食べました。 | 0.7043564497093551 | 0.7341796340817486 | 1.0 |
※結果は見やすいように加工しています。
<details>
<summary>説明を開く</summary>
<div>
[spaCy](https://spacy.io/) の [Doc.similarity()](https://spacy.io/api/doc#similarity) を利用して文同士の類似度を計算しています。自分自身との類似度は1で、類似度の値が大きいほど似ているということです。
</div>
</details>
## ライセンス
### GiNZA
[GiNZA](https://github.com/megagonlabs/ginza) そのものは [MIT License](https://github.com/megagonlabs/ginza/blob/develop/LICENSE) で利用できます。詳しくは [ライセンス条項](https://github.com/megagonlabs/ginza#license) をご覧ください。
### GiNZA examples
筆者の [Qiitaの記事](https://qiita.com/poyo46/items/7a4965455a8a2b2d2971) および [GiNZA examples - GitHub](https://github.com/poyo46/ginza-examples) も同様に [MIT License](https://github.com/poyo46/ginza-examples/blob/master/LICENSE) で利用できます。
## 注意事項
* [GiNZA](https://github.com/megagonlabs/ginza) を利用できる言語はPythonのみです。しかしフレームワークである [spaCy](https://spacy.io/) にはJavaScript版やR版など [Python以外の言語での実装](https://spacy.io/universe/category/nonpython) があるため、すごい人たちが移植してくれることを期待します。
* 単語のネガティブ・ポジティブを数値化する [Token.sentiment](https://spacy.io/api/token#attributes) は現時点で実装されていませんが、 [GiNZA](https://github.com/megagonlabs/ginza) 開発者の方から直々にコメントをいただき、今後実装を計画していただけるとのことです。
## ご意見・ご要望など
ご意見・ご要望などは随時受け付けています。 [Qiitaの記事](https://qiita.com/poyo46/items/7a4965455a8a2b2d2971) へコメント、または [GitHubのIssues](https://github.com/poyo46/ginza-examples/issues) へ投稿をお願いします。
|
[
"Named Entity Recognition",
"Semantic Text Processing",
"Syntactic Parsing",
"Syntactic Text Processing",
"Tagging"
] |
[] |
true |
https://github.com/PKSHATechnology-Research/tdmelodic
|
2020-09-14T09:12:44Z
|
A Japanese accent dictionary generator
|
|
<p align="center">
<img src="https://github.com/PKSHATechnology-Research/tdmelodic/raw/master/docs/imgs/logo/logo_tdmelodic.svg" width="200" />
</p>
# Tokyo Dialect MELOdic accent DICtionary (tdmelodic) generator
[](https://tdmelodic.readthedocs.io/en/latest)
[](https://arxiv.org/abs/2009.09679)
[](https://github.com/PKSHATechnology-Research/tdmelodic/actions/workflows/test.yml)
[](https://github.com/PKSHATechnology-Research/tdmelodic/actions/workflows/docker-image.yml)
[](https://github.com/PKSHATechnology-Research/tdmelodic/actions/workflows/img.yml)
[](https://opensource.org/licenses/BSD-3-Clause)
This module generates a large scale accent dictionary of
Japanese (Tokyo dialect) using a neural network based technique.
For academic use, please cite the following paper.
[[IEEE Xplore]](https://ieeexplore.ieee.org/document/9054081)
[[arXiv]](https://arxiv.org/abs/2009.09679)
```bibtex
@inproceedings{tachibana2020icassp,
author = "H. Tachibana and Y. Katayama",
title = "Accent Estimation of {Japanese} Words from Their Surfaces and Romanizations
for Building Large Vocabulary Accent Dictionaries",
booktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages = "8059--8063",
year = "2020",
doi = "10.1109/ICASSP40776.2020.9054081"
}
```
## Installation and Usage
- English: [tdmelodic Documentation](https://tdmelodic.readthedocs.io/en/latest)
- 日本語: [tdmelodic 利用マニュアル](https://tdmelodic.readthedocs.io/ja/latest)
## Acknowledgement
Some part of this work is based on the results obtained from a project subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
|
[
"Multimodality",
"Phonology",
"Speech & Audio in NLP"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/cl-tohoku/PheMT
|
2020-10-27T08:04:59Z
|
A phenomenon-wise evaluation dataset for Japanese-English machine translation robustness. The dataset is based on the MTNT dataset, with additional annotations of four linguistic phenomena; Proper Noun, Abbreviated Noun, Colloquial Expression, and Variant. COLING 2020.
|
|
# PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents
## Introduction
PheMT is a phenomenon-wise dataset designed for evaluating the robustness of Japanese-English machine translation systems.
The dataset is based on the MTNT dataset<sup>[1]</sup>, with additional annotations of four linguistic phenomena common in UGC; Proper Noun, Abbreviated Noun, Colloquial Expression, and Variant.
COLING 2020.
See [the paper](https://www.aclweb.org/anthology/2020.coling-main.521) for more information.
New!! ready-to-use [evaluation tools](https://github.com/cl-tohoku/PheMT/tree/main/eval_tools) are now available! (Feb. 2021)
## About this repository
This repository contains the following.
```
.
├── README.md
├── mtnt_approp_annotated.tsv # pre-filtered MTNT dataset with annotated appropriateness (See Appendix A)
├── proper
│ ├── proper.alignment # translations of targeted expressions
│ ├── proper.en # references
│ ├── proper.ja # source sentences
│ └── proper.tsv
├── abbrev
│ ├── abbrev.alignment
│ ├── abbrev.en
│ ├── abbrev.norm.ja # normalized source sentences
│ ├── abbrev.orig.ja # original source sentences
│ └── abbrev.tsv
├── colloq
│ ├── colloq.alignment
│ ├── colloq.en
│ ├── colloq.norm.ja
│ ├── colloq.orig.ja
│ └── colloq.tsv
├── variant
│ ├── variant.alignment
│ ├── variant.en
│ ├── variant.norm.ja
│ ├── variant.orig.ja
│ └── variant.tsv
└── src
└── calc_acc.py # script for calculating translation accuracy
```
Please feed both original and normalized versions of source sentences to your model to get the difference of arbitrary metrics as a robustness measure.
Also, we extracted translations for expressions presenting targeted phenomena.
We recommend using `src/calc_acc.py` to measure the effect of each phenomenon more directly with the help of translation accuracy.
USAGE: `python calc_acc.py system_output {proper, abbrev, colloq, variant}.alignment`
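As a rough illustration, the robustness gap can be computed with any corpus-level metric. The sketch below uses sacreBLEU, which is not part of this repository, and assumes your system translations of `colloq.orig.ja` and `colloq.norm.ja` have been saved to the hypothetical files `outputs/colloq.orig.hyp` and `outputs/colloq.norm.hyp`.
```python
# Minimal sketch: robustness as the BLEU difference between normalized and
# original inputs (sacrebleu is an external dependency, not part of PheMT).
import sacrebleu

def read_lines(path):
    with open(path, encoding='utf-8') as f:
        return [line.rstrip('\n') for line in f]

refs = read_lines('colloq/colloq.en')              # references from this repository
hyp_orig = read_lines('outputs/colloq.orig.hyp')   # hypothetical: translations of colloq.orig.ja
hyp_norm = read_lines('outputs/colloq.norm.hyp')   # hypothetical: translations of colloq.norm.ja

bleu_orig = sacrebleu.corpus_bleu(hyp_orig, [refs]).score
bleu_norm = sacrebleu.corpus_bleu(hyp_norm, [refs]).score
print(f'BLEU on normalized inputs: {bleu_norm:.2f}')
print(f'BLEU on original inputs:   {bleu_orig:.2f}')
print(f'robustness gap:            {bleu_norm - bleu_orig:.2f}')
```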
## Basic statistics and examples from the dataset
- Statistics
| Dataset | # sent. | # unique expressions (ratio) | average edit distance |
| ---- | ---- | ---- | ---- |
| Proper Noun | 943 | 747 (79.2%) | (no normalized version) |
| Abbreviated Noun | 348 | 234 (67.2%) | 5.04 |
| Colloquial Expression | 172 | 153 (89.0%) | 1.77 |
| Variant | 103 | 97 (94.2%) | 3.42 |
- Examples
```
- Abbreviated Noun
original source : 地味なアプデ (apude, meaning update) だが
normalized source : 地味なアップデート (update) だが
reference : That’s a plain update though
alignment : update
- Colloquial Expression
original source : ここまで描いて飽きた、かなちい (kanachii, meaning sad)
normalized source : ここまで描いて飽きた、かなしい (kanashii)
reference : Drawing this much then getting bored, how sad.
alignment : sad
```
## Citation
If you use our dataset for your research, please cite the following paper:
```
@inproceedings{fujii-etal-2020-phemt,
title = "{P}he{MT}: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents",
author = "Fujii, Ryo and
Mita, Masato and
Abe, Kaori and
Hanawa, Kazuaki and
Morishita, Makoto and
Suzuki, Jun and
Inui, Kentaro",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.521",
pages = "5929--5943",
}
```
## Reference
[1] Michel and Neubig (2018), MTNT: A Testbed for Machine Translation of Noisy Text.
|
[
"Machine Translation",
"Multilinguality",
"Responsible & Trustworthy NLP",
"Robustness in NLP",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/tsuruoka-lab/AMI-Meeting-Parallel-Corpus
|
2020-12-11T04:54:03Z
|
AMI Meeting Parallel Corpus
|
|
# The AMI Meeting Parallel Corpus
©2020, The University of Tokyo
# Corpus Description
The [original AMI Meeting Corpus](http://groups.inf.ed.ac.uk/ami/corpus/index.shtml) is a multi-modal dataset containing 100 hours of meeting recordings in English.
The parallel version was constructed by asking professional translators to translate utterances from the original corpus into Japanese. Since the original corpus consists of speech transcripts, the English sentences contain a lot of short utterances (e.g., "Yeah", "Okay") or fillers (e.g., "Um"), and these are translated into Japanese as well. Therefore, it contains many duplicate sentences.
We provide training, development and evaluation splits from the AMI Meeting Parallel Corpus. In this repository we publicly share the full development and evaluation sets and a part of the training data set.
| | Training | Development | Evaluation |
|-------- |---------: |:-----------: |:----------: |
| Sentences | 20,000 | 2,000 | 2,000 |
| Scenarios | 30 | 5 | 5 |
# Corpus Structure
The corpus is structured in json format consisting of documents, which consist of sentence pairs. Each sentence pair has a sentence number, speaker identifier (to distinguish different speakers), text in English and Japanese, and original language (always English).
```json
[
{
"id": "IS1004a",
"original_language": "en",
"conversation": [
...,
{
"no": 22,
"speaker": "A",
"ja_sentence": "では、このプロジェクトの目的は、あー、新しいリモコンを作ることです。",
"en_sentence": "So, the goal of this project is to uh developed a new remote control."
},
...
]
},
...
]
```
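For example, the sentence pairs can be read with the standard library alone; a minimal sketch, assuming `dev.json` from this repository is in the working directory:
```python
# Minimal sketch: print English-Japanese sentence pairs from dev.json.
import json

with open('dev.json', encoding='utf-8') as f:
    documents = json.load(f)

for doc in documents:
    for pair in doc['conversation']:
        print(f"[{doc['id']}] #{pair['no']} speaker {pair['speaker']}")
        print('  en:', pair['en_sentence'])
        print('  ja:', pair['ja_sentence'])
```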
## License
Our dataset is released under the [Creative Commons Attribution-ShareAlike (CC BY 4.0) license](https://creativecommons.org/licenses/by/4.0/legalcode).
## Reference
If you use this dataset, please cite the following paper:
Matīss Rikters, Ryokan Ri, Tong Li, and Toshiaki Nakazawa (2020). "[Document-aligned Japanese-English Conversation Parallel Corpus](http://www.statmt.org/wmt20/pdf/2020.wmt-1.74.pdf)." In Proceedings of the Fifth Conference on Machine Translation, 2020.
```bibtex
@InProceedings{rikters-EtAl:2020:WMT,
author = {Rikters, Matīss and Ri, Ryokan and Li, Tong and Nakazawa, Toshiaki},
title = {Document-aligned Japanese-English Conversation Parallel Corpus},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
month = {November},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
pages = {637--643},
url = {https://www.aclweb.org/anthology/2020.wmt-1.74}
}
```
## Acknowledgements
This work was supported by "Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation", the Commissioned Research of National Institute of Information and Communications Technology (NICT), JAPAN.
|
[
"Machine Translation",
"Multilinguality",
"Speech & Audio in NLP",
"Text Generation"
] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/mkartawijaya/hasami
|
2020-12-29T19:45:15Z
|
A tool to perform sentence segmentation on Japanese text
|
|
# Hasami
Hasami is a tool to perform sentence segmentation on Japanese text.
* Sentences are split on common sentence ending markers like `!?。`
* Enclosed sentence endings will not be split, i.e. those inside quotes or parentheses.
* Runs of sentence ending markers are treated as a single sentence ending.
* You can configure custom sentence ending markers and enclosures if the defaults don't cover your needs.
* You can define exceptions for when not to split sentences.
## Installation
```bash
$ pip install hasami
```
## Usage
A simple command line interface is provided. Input is read from `stdin` or from a file.
```bash
$ echo "これが最初の文。これは二番目の文。これが最後の文。" | tee input.txt | hasami
これが最初の文。
これは二番目の文。
これが最後の文。
$ hasami input.txt
これが最初の文。
これは二番目の文。
これが最後の文。
```
Usage in code:
```python
import hasami
hasami.segment_sentences('これが最初の文。これは二番目の文。これが最後の文。')
# => ['これが最初の文。', 'これは二番目の文。', 'これが最後の文。']
```
More examples:
```python
import hasami
# Instead of splitting you can also just insert newlines.
hasami.insert_newlines('これが最初の文。これは二番目の文。これが最後の文。')
# => 'これが最初の文。\nこれは二番目の文。\nこれが最後の文。\n'
# Runs of sentence ending markers are treated as a single sentence ending.
hasami.segment_sentences('え、本当…!?嘘だろ…')
# => ['え、本当…!?', '嘘だろ…']
# Enclosed sentence endings are ignored.
hasami.segment_sentences('「うまく行くかな?」と思った。')
# => ['「うまく行くかな?」と思った。']
```
## Customization
The defaults should work for most of the punctuation found in natural
text but it is possible to define custom enclosures and sentence ending markers if necessary.
You can also define exceptions for when sentence segmentation should not happen,
for example in cases of untypical use of punctuation.
```python
from hasami import Hasami, DEFAULT_ENCLOSURES, DEFAULT_SENTENCE_ENDING_MARKERS
# Pass a string of pairs of opening/closing characters to define custom enclosures.
with_custom_enclosures = Hasami(enclosures=DEFAULT_ENCLOSURES + '<>')
with_custom_enclosures.segment_sentences('<うまく行くかな?>と思った。')
# => ['<うまく行くかな?>と思った。']
# Pass an empty string if you want all enclosures to be ignored.
without_enclosures = Hasami(enclosures='')
without_enclosures.segment_sentences('「うまく行くかな?」と思った。')
# => ['「うまく行くかな?', '」と思った。']
# Pass a string of characters that should be considered as sentence ending markers.
with_custom_endings = Hasami(sentence_ending_markers=DEFAULT_SENTENCE_ENDING_MARKERS + '.,')
with_custom_endings.segment_sentences('これが最初の文.これは二番目の文,これが最後の文.')
# => ['これが最初の文.', 'これは二番目の文,', 'これが最後の文.']
# Pass a list of patterns to define exceptions where segmentation should not happen.
# Make sure to include the newline which should be removed in the pattern.
with_exceptions = Hasami(exceptions=['君の名は。\n'])
with_exceptions.segment_sentences('君の名は。見たことあるの?')
# => ['君の名は。見たことあるの?']
```
## License
Released under the BSD-3-Clause License
|
[
"Syntactic Text Processing",
"Text Segmentation"
] |
[] |
true |
https://github.com/s-taka/fugumt
|
2021-01-11T07:23:13Z
|
ぷるーふおぶこんせぷと で公開した機械翻訳エンジンを利用する翻訳環境です。 フォームに入力された文字列の翻訳、PDFの翻訳が可能です。
|
|
Fugu-Machine Translator
====
[ぷるーふおぶこんせぷと](https://staka.jp/wordpress/?p=413)
で公開した機械翻訳エンジンを利用する翻訳環境です。
フォームに入力された文字列の翻訳、PDFの翻訳が可能です。
あくまで検証用の環境・ソフトウェアであり、公開サーバ上での実行を想定したものではありません。
(上記BLOGで公開されているWEB環境で用いているソフトウェアではありません。)
Usage
----
### 翻訳サーバの実行
Dockerがセットアップされている場合、下記のように実行できます。
1. git clone後、model/ 以下に「[FuguMT model](https://fugumt.com/FuguMT_ver.202011.1.zip) 」で配布されているモデルをダウンロード、展開
- ``git clone http://github.com/s-taka/fugumt``
- ``wget https://fugumt.com/FuguMT_ver.202011.1.zip``
- ``shasum FuguMT_ver.202011.1.zip``
- ハッシュ値が 0cf8a1fc540b4c7b4388b75b71858c0eb32e392a であることを確認
- ``unzip FuguMT_ver.202011.1.zip``
- 解凍した場所から移動 ``mv model/* fugumt/model``
2. Docker環境を構築
- ``cd fugumt/docker``
- ``docker build -t fugu_mt .``
3. コンテナを実行
- ``docker run -v /path_to/fugumt/:/app/fugu_mt -p 127.0.0.1:8888:8080 -it --user `id -u`:`id -g` --rm fugu_mt
python3 /app/fugu_mt/run.py /app/fugu_mt/config.json``
- 「/path_to」は環境に合わせて変更してください。git cloneを行った先のディレクトリを指定する必要があります。
- Load completeと表示されたら実行ができています。
実行後、http://localhost:8888/
にアクセスすることで翻訳エンジンを利用可能です。
http://localhost:8888/pdf_upload/
からPDFの翻訳を行うことができます。
パスワードはconfig.jsonの"auth_info"から設定可能です。
デフォルトではpdf/pdfとなっています。
本ソフトウェアは信頼できるネットワーク上での実行を前提に利用してください。
### pdfの翻訳
翻訳サーバの実行の2.まで構築が終わっていれば、環境変数を設定し、コマンドラインからPDFを翻訳することもできます。
1. Dockerコンテナ起動
* ``docker run -v /path_to/fugumt/:/app/fugu_mt -it --user `id -u`:`id -g` --rm fugu_mt bash``
2. 環境変数を設定、カレントディレクトリの変更
```shell
export TFHUB_CACHE_DIR=/app/fugu_mt/cache/
export NLTK_DATA=/app/fugu_mt/cache/
export ALLENNLP_CACHE_ROOT=/app/fugu_mt/cache/
cd /app/fugu_mt/
```
3. コマンド実行
* ``python3 /app/fugu_mt/pdf_server.py --pdf Dockerコンテナ上のPDFパス
--out Dockerコンテナ上のpickle保存場所
--out_html Dockerコンテナ上のHTML保存場所
--mk_process 1
/app/fugu_mt/config.json``
### marian-decoderの実行
より簡易にモデルを試す場合は以下の手順でテストが可能です。
Docker build、モデルのダウンロードは「翻訳サーバの実行」と同じです。
* ``docker run -v /path_to/fugumt/:/app/fugu_mt -it --user `id -u`:`id -g` --rm fugu_mt
bash``
* 「/path_to」は環境に合わせて変更してください。git cloneを行った先のディレクトリを指定する必要があります。
* ``cd /app/fugu_mt``
* ``echo "Fugu MT model" | /app/marian/build/marian-decoder -c model/model.npz.decoder.yml``
下記のように_uncasedを指定すると、大文字・小文字を無視した翻訳を行います。
* ``echo "Fugu MT model" | /app/marian/build/marian-decoder -c model/model_uncased.npz.decoder.yml``
### fugumtライブラリの実行
ライブラリを通した翻訳は下記のように実行することができます。
Docker build、モデルのダウンロードは「翻訳サーバの実行」と同じです。
1. 環境変数を設定
```shell
export TFHUB_CACHE_DIR=/app/fugu_mt/cache/
export NLTK_DATA=/app/fugu_mt/cache/
export ALLENNLP_CACHE_ROOT=/app/fugu_mt/cache/
```
2. pythonコードを実行(python3を実行後に入力)
```python
# ライブラリをimport
from fugumt.tojpn import FuguJPNTranslator
from fugumt.misc import make_marian_process
from fugumt.misc import close_marian_process
# marian processを作成
marian_processes = make_marian_process("/app/marian/build/marian-server",
[["-p","8001","-c","model/model.npz.decoder.yml", "--log", "log/marian8001.log"]],
[8001])
# 翻訳
fgmt = FuguJPNTranslator([8001])
translated = fgmt.translate_text("This is a Fugu machine translator.")
print(translated)
# marian processをクローズ
close_marian_process(marian_processes)
```
3. translate_textが返す値は下記の構造となっています。複数の文が入力された場合はlistにappendされます。
訳抜け防止モードの説明は「 [
機械翻訳と訳抜けとConstituency parsing](https://staka.jp/wordpress/?p=357) 」をご参照下さい。
```python
[{'best_is_norm': 1, # 通常翻訳のスコアが良い場合は1、訳抜け防止モードが良い場合は0
'en': 'This is a Fugu machine translator.', # 入力された英文
'ja_best': 'ふぐ機械翻訳機。', # スコアが一番良かった日本語訳
'ja_best_score': 0.2991045981645584, # 上記スコア
'ja_norm': 'ふぐ機械翻訳機。', # 通常翻訳で一番良かった日本語訳
'ja_norm_score': 0.2991045981645584, # 上記スコア
'ja_parse': 'ふぐ機械翻訳機。', # 訳抜け防止モードで一番良かった日本語訳
'ja_parse_score': 0.2991045981645584 # 上記スコア
}]
```
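参考までに、上記の戻り値から訳文とスコアだけを取り出す場合は次のように書けます(前述のコードで得た `translated` を使う想定の最小限のスケッチです)。
```python
# translate_text の戻り値(上記の構造)から日本語訳とスコアを取り出すスケッチ
for result in translated:
    mode = '通常' if result['best_is_norm'] == 1 else '訳抜け防止'
    print(f"{result['en']} -> {result['ja_best']} ({mode}, score={result['ja_best_score']:.3f})")
```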
謝辞・ライセンス
----
本ソフトウェアは下記のライブラリ・ソフトウェアを利用しています。
またDockerfileに記載の通り、ubuntuで使用可能なパッケージを多数使用しています。
OSSとして素晴らしいソフトウェアを公開された方々に感謝いたします。
* Marian-NMT (MIT-License): https://github.com/marian-nmt/marian
* SentencePiece(Apache-2.0 License): https://github.com/google/sentencepiece
* NLTK (Apache License Version 2.0): https://www.nltk.org/
* MeCab (BSDライセンス): https://taku910.github.io/mecab/
* mecab-python3 (Like MeCab itself, mecab-python3 is copyrighted free software by Taku Kudo [email protected] and Nippon Telegraph and Telephone Corporation, and is distributed under a 3-clause BSD license ): https://github.com/SamuraiT/mecab-python3
* unidic-lite(BSD license): https://github.com/polm/unidic-lite
* bottle (MIT-License): https://bottlepy.org/docs/dev/
* gunicorn (MIT License): https://github.com/benoitc/gunicorn
* tensorflow (Apache 2.0): https://github.com/tensorflow/tensorflow
* Universal Sentence Encoder (Apache 2.0): https://tfhub.dev/google/universal-sentence-encoder/3
* allennlp (Apache 2.0):https://github.com/allenai/allennlp , [AllenNLP: A Deep Semantic Natural Language Processing Platform](https://www.semanticscholar.org/paper/AllenNLP%3A-A-Deep-Semantic-Natural-Language-Platform-Gardner-Grus/a5502187140cdd98d76ae711973dbcdaf1fef46d)
* spacy (MIT License): https://spacy.io/
* pdfminer (MIT-License): https://github.com/euske/pdfminer
* websocket-client (BSD-3-Clause License): https://github.com/websocket-client/websocket-client
* psutil(BSD-3-Clause License): https://github.com/giampaolo/psutil
* timeout-decorator (MIT License): https://github.com/pnpnpn/timeout-decorator
* bootstrap(MIT-License) : https://getbootstrap.com/
* jquery(MIT-License): https://jquery.com/
* DataTables(MIT-License): https://datatables.net/
本ソフトウェアは研究用を目的に公開しています。
作者(Satoshi Takahashi)は本ソフトウェアの動作を保証せず、本ソフトウェアを使用して発生したあらゆる結果について一切の責任を負いません。
本ソフトウェア(Code)はMIT-Licenseです。
モデル作成では上記ソフトウェアに加え、下記のデータセット・ソフトウェアを使用しています。
オープンなライセンスでソフトウェア・データセットを公開された方々に感謝いたします。
* Beautiful Soup (MIT License): https://www.crummy.com/software/BeautifulSoup/
* feedparser (BSD License): https://github.com/kurtmckee/feedparser
* LaBSE (Apache 2.0): https://tfhub.dev/google/LaBSE/
* Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Narveen Ari, Wei Wang. Language-agnostic BERT Sentence Embedding. July 2020
* Japanese-English Subtitle Corpus (CC BY-SA 4.0): https://nlp.stanford.edu/projects/jesc/
* Pryzant, R. and Chung, Y. and Jurafsky, D. and Britz, D.,
JESC: Japanese-English Subtitle Corpus,
Language Resources and Evaluation Conference (LREC), 2018
* 京都フリー翻訳タスク (KFTT) (CC BY-SA 3.0): http://www.phontron.com/kftt/index-ja.html
* Graham Neubig, "The Kyoto Free Translation Task," http://www.phontron.com/kftt, 2011.
* Tanaka Corpus (CC BY 2.0 FR):http://www.edrdg.org/wiki/index.php/Tanaka_Corpus
* > Professor Tanaka originally placed the Corpus in the Public Domain, and that status was maintained for the versions used by WWWJDIC. In late 2009 the Tatoeba Project decided to move it to a Creative Commons CC-BY licence (that project is in France, where the concept of public domain is not part of the legal framework.) It can be freely downloaded and used provided the source is attributed.
* JSNLI (CC BY-SA 4.0):http://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88
* 吉越 卓見, 河原 大輔, 黒橋 禎夫: 機械翻訳を用いた自然言語推論データセットの多言語化, 第244回自然言語処理研究会, (2020.7.3).
* WikiMatrix (Creative Commons Attribution-ShareAlike license):https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix
* Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
* Tatoeba (CC BY 2.0 FR): https://tatoeba.org/jpn
* > https://tatoeba.org TatoebaのデータはCC-BY 2.0 FRで提供されています。
* CCAligned (No claims of intellectual property are made on the work of preparation of the corpus. ): http://www.statmt.org/cc-aligned/
* El-Kishky, Ahmed and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Koehn, Philipp,
CCAligned: A Massive Collection of Cross-lingual Web-Document Pairs,
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), 2020
ニューラル機械翻訳モデル「[FuguMT model](http://plant-check.jp:8080/static/FuguMT_ver.202011.1.zip) 」は
上記に独自収集データを加えMarian-NMT + SentencePieceで作成しています。
モデル構築に使用したデータ量は約660万対訳ペア、V100 GPU 1つを用いて約30時間学習しています。
「FuguMT model ver.202011.1」のライセンスは[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja)
です。fine tuningを行う場合も本モデルのライセンスに従った取扱いをお願いいたします。
(zipファイル内のreadmeもご確認ください。)
本モデルは研究用を目的に公開しています。
作者(Satoshi Takahashi)は本モデルの動作を保証せず、本モデルを使用して発生したあらゆる結果について一切の責任を負いません。
※ FuguMT model ver.202011.1ではTatoeba、CCAlignedは使用しておらず、ver.202101.1以降のモデルで使用予定です。
※ 出典を書く際はBlogのURL記載またはリンクをお願いします。
https://staka.jp/wordpress/
|
[
"Machine Translation",
"Multilinguality",
"Text Generation"
] |
[] |
true |
https://github.com/wtnv-lab/tweetMapping
|
2021-02-16T22:54:22Z
|
東日本大震災発生から24時間以内につぶやかれたジオタグ付きツイートのデジタルアーカイブです。
|
|
# 東日本大震災ツイートマッピング
東日本大震災発生から24時間以内につぶやかれたジオタグ付きツイートのデジタルアーカイブです。
- [https://tweet.mapping.jp/](https://tweet.mapping.jp/)で公開中です。
- 全ジオタグ付きツイートから,@付き,BOTによるものなどを削除した5765件を,位置情報をある程度ずらした上でマッピングしています。
- このコンテンツは,2012年に開催された「[東日本大震災ビッグデータワークショップ](https://sites.google.com/site/prj311/)」の成果物をCesiumに移植したものです。
- [東京大学大学院 渡邉英徳研究室](https://labo.wtnv.jp/)が作成・運営しています。
- ソースコードはコメントを参考に改修の上,自由にお使いください。
- お問い合わせは hwtnv(at)iii.u-tokyo.ac.jp まで
|
[] |
[
"Annotation and Dataset Development"
] |
true |
https://github.com/t-sagara/jageocoder
|
2021-02-20T08:31:55Z
|
Pure Python Japanese address geocoder
|
日本語版は README_ja.md をお読みください。
This is a Python port of the Japanese-address geocoder DAMS used in CSIS at the University of Tokyo's "Address Matching Service" and GSI
Maps.
This package provides address-geocoding and reverse-geocoding functionality for Python programs. The basic usage is to specify a dictionary
with init() then call search() to get geocoding results.
About
Japanese address geocoder that
works both offline and online.
t-sagara.github.io/jageocoder/
# python # geocoding # address
Readme
View license
Security policy
Activity
64 stars
4 watching
7 forks
Report repository
Releases 26
v2.1.8
Latest
last month
+ 25 releases
Packages
No packages published
Contributors
4
Languages
Python 92.2%
HTML 7.8%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Jageocoder - A Python Japanese geocoder
Getting Started
>>> import jageocoder
>>> jageocoder.init(url='https://jageocoder.info-proto.com/jsonrpc')
>>> jageocoder.search('新宿区西新宿2-8-1')
{'matched': '新宿区西新宿2-8-', 'candidates': [{'id': 5961406, 'name': '8番', 'x': 139.691778, 'y': 35.689627, 'level': 7, 'note': None, 'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番']}]}
README
License
Security
Requires Python 3.8 or later.
All other required packages will be installed automatically.
Install the package with pip install jageocoder
To use Jageocoder, you need to install the "Dictionary Database" on the same machine or connect to the RPC service provided by jageocoder-
server .
When a dictionary database is installed, large amounts of data can be processed at high speed. A database covering addresses in Japan
requires 25 GB or more of storage.
Download an address database file compatible with that version from here
Install the dictionary with install-dictionary command
If you need to know the location of the dictionary directory, perform get-db-dir command as follows. (Or call jageocoder.get_db_dir() in
your script)
If you prefer to create the database in another location, set the environment variable JAGEOCODER_DB2_DIR before executing
install_dictionary() to specify the directory.
Since dictionary databases are large in size, installing them on multiple machines consumes storage and requires time and effort to update
them. Instead of installing a dictionary database on each machine, you can connect to a Jageocoder server to perform the search process.
If you want to use a server, specify the server endpoint in the environment variable JAGEOCODER_SERVER_URL . For a public demonstration
server, use the following
However, the server for public demonstrations cannot withstand the load when accesses are concentrated, so it is limited to one request per
second. If you want to process a large number of requests, please refer to here to set up your own Jageocoder server. The endpoint is
'/jsonrpc' on the server.
Remove the directory containing the database, or perform uninstall-dictionary command as follows.
Then, uninstall the package with pip command.
How to install
Prerequisites
Install instructions
Install Dictionary Database
jageocoder download-dictionary https://www.info-proto.com/static/jageocoder/latest/v2/jukyo_all_v21.zip
jageocoder install-dictionary jukyo_all_v21.zip
jageocoder get-db-dir
export JAGEOCODER_DB2_DIR='/usr/local/share/jageocoder/db2'
install-dictionary <db-file>
Connect to the Jageocoder server
export JAGEOCODER_SERVER_URL=https://jageocoder.info-proto.com/jsonrpc
Uninstall instructions
jageocoder uninstall-dictionary
Jageocoder is intended to be embedded in applications as a library and used by calling the API, but a simple command line interface is also
provided.
For example, to geocode an address, execute the following command.
You can check the list of available commands with --help .
First, import jageocoder and initialize it with init() .
The parameter db_dir of init() can be used to specify the directory where the address database is installed. Alternatively, you can specify
the endpoint URL of the Jageocoder server with url . If it is omitted, the value of the environment variable is used.
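For example, either of the following calls can be used (these lines mirror the examples in the Japanese README below; the path and URL are placeholders for your own environment):
>>> jageocoder.init(db_dir='/path/to/the/database')
>>> jageocoder.init(url='https://your.jageocoder.server/jsonrpc')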
Use search() to search for the address you want to check the longitude and latitude of.
The search() function returns a dict with matched as the matched string and candidates as the list of search results. (The results are
formatted for better viewing)
Each element of candidates contains the information of an address node (AddressNode).
The meaning of the items is as follows
id: ID in the database
name: Address notation
x: longitude
y: latitude
level: Address level (1:Prefecture, 2:County, 3:City and 23 district, 4:Ward, 5:Oaza, 6:Aza and Chome, 7:Block, 8:Building)
note: Notes such as city codes
fullname: List of address notations from the prefecture level to this node
You can specify the latitude and longitude of a point and look up the address of that point (so-called reverse geocoding).
When you pass the longitude and latitude of the point you wish to look up to reverse() , you can retrieve up to three address nodes
surrounding the specified point.
pip uninstall jageocoder
How to use
Use from the command line
jageocoder search 新宿区西新宿2-8-1
jageocoder --help
Using API
>>> import jageocoder
>>> jageocoder.init()
Search for latitude and longitude by address
>>> jageocoder.search('新宿区西新宿2-8-1')
{
'matched': '新宿区西新宿2-8-',
'candidates': [{
'id': 12299846, 'name': '8番',
'x': 139.691778, 'y': 35.689627, 'level': 7, 'note': None,
'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番']
}]
}
Search for addresses by longitude and latitude
In the example above, the level optional parameter is set to 7 to search down to the block (街区・地番) level.
Note
Indexes for reverse geocoding are automatically created the first time you perform reverse geocoding. Note that this process can take a
long time.
Use searchNode() to retrieve information about an address.
This function returns a list of type jageocoder.result.Result . You can access the address node from node element of the Result object.
You can use the as_geojson() method of the Result and AddressNode objects to obtain the GeoJSON representation.
There are two types of local government codes: JISX0402 (5-digit) and Local Government Code (6-digit).
You can also obtain the prefecture code JISX0401 (2 digits).
Generate URLs to link to GSI and Google maps.
A "parent node" is a node that represents a level above the address. Get the node by attribute parent .
Now the node points to '8番', so the parent node will be '二丁目'.
>>> import jageocoder
>>> jageocoder.init()
>>> triangle = jageocoder.reverse(139.6917, 35.6896, level=7)
>>> if len(triangle) > 0:
... print(triangle[0]['candidate']['fullname'])
...
['東京都', '新宿区', '西新宿', '二丁目', '8番']
Explore the attribute information of an address
>>> results = jageocoder.searchNode('新宿区西新宿2-8-1')
>>> len(results)
1
>>> results[0].matched
'新宿区西新宿2-8-'
>>> type(results[0].node)
<class 'jageocoder.node.AddressNode'>
>>> node = results[0].node
>>> node.get_fullname()
['東京都', '新宿区', '西新宿', '二丁目', '8番']
Get GeoJSON representation
>>> results[0].as_geojson()
{'type': 'Feature', 'geometry': {'type': 'Point', 'coordinates': [139.691778, 35.689627]}, 'properties': {'id': 12299851, 'name': '8番', 'level': 7, 'note': None, 'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番'], 'matched': '新宿区西新宿2-8-'}}
>>> results[0].node.as_geojson()
{'type': 'Feature', 'geometry': {'type': 'Point', 'coordinates': [139.691778, 35.689627]}, 'properties': {'id': 12299851, 'name': '8番', 'level': 7, 'note': None, 'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番']}}
Get the local government codes
>>> node.get_city_jiscode() # 5-digit code
'13104'
>>> node.get_city_local_authority_code() # 6-digit code
'131041'
>>> node.get_pref_jiscode() # prefecture code
'13'
Get link URLs to maps
>>> node.get_gsimap_link()
'https://maps.gsi.go.jp/#16/35.689627/139.691778/'
>>> node.get_googlemap_link()
'https://maps.google.com/maps?q=35.689627,139.691778&z=16'
Traverse the parent node
A "child node" is a node that represents a level below the address. Get the node by attribute children .
There is one parent node, but there are multiple child nodes. The actual return is a SQL query object, but it can be looped through with an
iterator or cast to a list.
Now the parent points to '二丁目', so the child node will be the block number (○番) contained therein.
Tutorials and references are here.
Consider using jageocoder-converter.
Address notation varies. So suggestions for logic improvements are welcome. Please submit an issue with examples of address notations in
use and how they should be parsed.
Takeshi SAGARA - Info-proto Co.,Ltd.
This project is licensed under the MIT License.
This is not the scope of the dictionary data license. Please follow the license of the respective dictionary data.
We would like to thank CSIS for allowing us to provide address matching services on their institutional website for over 20 years.
>>> parent = node.parent
>>> parent.get_fullname()
['東京都', '新宿区', '西新宿', '二丁目']
>>> parent.x, parent.y
(139.691774, 35.68945)
Traverse the child nodes
>>> parent.children
<sqlalchemy.orm.dynamic.AppenderQuery object at 0x7fbc08404b38>
>>> [child.name for child in parent.children]
['10番', '11番', '1番', '2番', '3番', '4番', '5番', '6番', '7番', '8番', '9番']
For developers
Documentation
Create your own dictionary
Contributing
Authors
License
Acknowledgements
|
# Jageocoder - A Python Japanese geocoder
`Jageocoder` は日本の住所用ジオコーダです。
東京大学空間情報科学研究センターの
[「CSV アドレスマッチングサービス」](https://geocode.csis.u-tokyo.ac.jp/home/csv-admatch/) および国土地理院の [「地理院地図」](https://maps.gsi.go.jp/) で利用している C++ ジオコーダ `DAMS` を Python に移植しました。
# はじめに
このパッケージは Python プログラムに住所ジオコーディングと逆ジオコーディング機能を提供します。
`init()` で初期化し、 `search()` に住所文字列を渡すと、ジオコーディング結果が得られます。
```python
>>> import jageocoder
>>> jageocoder.init(url='https://jageocoder.info-proto.com/jsonrpc')
>>> jageocoder.search('新宿区西新宿2-8-1')
{'matched': '新宿区西新宿2-8-', 'candidates': [{'id': 5961406, 'name': '8番', 'x': 139.691778, 'y': 35.689627, 'level': 7, 'note': None, 'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番']}]}
```
# インストール方法
## 事前準備
Python 3.7 以降が動作する環境が必要です。
その他の依存パッケージは自動的にインストールされます。
## インストール手順
`pip install jageocoder` でパッケージをインストールします
```
pip install jageocoder
```
Jageocoder を利用するには、同一マシン上に「辞書データベース」をインストールするか、 [jageocoder-server](https://t-sagara.github.io/jageocoder/server/) が提供する RPC サービスに接続する必要があります。
### 辞書データベースをインストールする場合
辞書データベースをインストールすると大量のデータも高速に処理できます。全国の住所を網羅するデータベースは 30GB 以上のストレージが必要です。
- 利用する辞書データベースファイルを [ここから](https://www.info-proto.com/static/jageocoder/latest/v2/) ダウンロードします
wget https://www.info-proto.com/static/jageocoder/latest/v2/jukyo_all_v21.zip
- 辞書データベースをインストールします
jageocoder install-dictionary jukyo_all_v21.zip
辞書データベースが作成されたディレクトリを知る必要がある場合、
以下のように `get-db-dir` コマンドを実行するか、スクリプト内で
`jageocoder.get_db_dir()` メソッドを呼びだしてください。
```bash
jageocoder get-db-dir
```
上記以外の任意の場所に作成したい場合、住所辞書をインストールする前に
環境変数 `JAGEOCODER_DB2_DIR` でディレクトリを指定してください。
```bash
export JAGEOCODER_DB2_DIR='/usr/local/share/jageocoder/db2'
jageocoder install-dictionary jukyo_all_v21.zip
```
### Jageocoder サーバに接続する場合
辞書データベースはサイズが大きいので、複数のマシンにインストールするとストレージを消費しますし、更新の手間もかかります。
そこで各マシンに辞書データベースをインストールする代わりに、Jageocoder サーバに接続して検索処理を代行させることもできます。
サーバを利用したい場合、環境変数 `JAGEOCODER_SERVER_URL` にサーバの
エンドポイントを指定してください。
公開デモンストレーション用サーバの場合は次の通りです。
```bash
export JAGEOCODER_SERVER_URL=https://jageocoder.info-proto.com/jsonrpc
```
ただし公開デモンストレーション用サーバはアクセスが集中すると負荷に耐えられないため、1秒1リクエストまでに制限しています。
大量の処理を行いたい場合は [こちら](https://t-sagara.github.io/jageocoder/server/) を参照して独自 Jageocoder サーバを設置してください。エンドポイントはサーバの `/jsonrpc` になります。
## アンインストール手順
アンインストールする場合、まず辞書データベースを含むディレクトリを
削除してください。ディレクトリごと削除しても構いませんが、
`uninstall-dictionary` コマンドも利用できます。
```bash
jageocoder uninstall-dictionary
```
その後、 jageocoder パッケージを pip でアンインストールしてください。
```bash
pip uninstall jageocoder
```
# 使い方
## コマンドラインから利用する
Jageocoder はライブラリとしてアプリケーションに組み込み、API を呼びだして利用することを想定していますが、簡単なコマンドラインインタフェースも用意しています。
たとえば住所をジオコーディングしたい場合は次のコマンドを実行します。
```bash
jageocoder search 新宿区西新宿2-8-1
```
利用可能なコマンド一覧は `--help` で確認してください。
```bash
jageocoder --help
```
## APIを利用する
まず jageocoder をインポートし、 `init()` で初期化します。
```python
>>> import jageocoder
>>> jageocoder.init()
```
`init()` のパラメータ `db_dir` で住所データベースがインストールされているディレクトリを指定できます。あるいは `url` で Jageocoder サーバのエンドポイント URL を指定できます。省略された場合は環境変数の値を利用します。
```python
>>> jageocoder.init(db_dir='/path/to/the/database')
>>> jageocoder.init(url='https://your.jageocoder.server/jsonrpc')
```
### 住所から経緯度を調べる
経緯度を調べたい住所を `search()` で検索します。
`search()` は一致した文字列を `matched` に、検索結果のリストを
`candidates` に持つ dict を返します。 `candidates` の各要素には
住所ノード (AddressNode) の情報が入っています
(見やすくするために表示結果を整形しています)。
```python
>>> jageocoder.search('新宿区西新宿2-8-1')
{
'matched': '新宿区西新宿2-8-',
'candidates': [{
'id': 12299846, 'name': '8番',
'x': 139.691778, 'y': 35.689627, 'level': 7, 'note': None,
'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番']
}]
}
```
項目の意味は次の通りです。
- id: データベース内での ID
- name: 住所表記
- x: 経度
- y: 緯度
- level: 住所レベル(1:都道府県, 2:郡/振興局, 3:市町村・23特別区,
4:政令市の区, 5:大字, 6:字・丁目, 7:街区・地番, 8:住居番号・枝番)
- note: メモ(自治体コードなど)
- fullname: 都道府県レベルからこのノードまでの住所表記のリスト
### 経緯度から住所を調べる
地点の経緯度を指定し、その地点の住所を調べることができます
(いわゆるリバースジオコーディング)。
`reverse()` に調べたい地点の経度と緯度を渡すと、指定した地点を囲む最大3点の住所ノードを検索できます。
```python
>>> import jageocoder
>>> jageocoder.init()
>>> triangle = jageocoder.reverse(139.6917, 35.6896, level=7)
>>> if len(triangle) > 0:
... print(triangle[0]['candidate']['fullname'])
...
['東京都', '新宿区', '西新宿', '二丁目', '8番']
```
上の例では ``level`` オプションパラメータに 7 を指定して、街区・地番レベルまで検索しています。
> [!Note]
>
> リバースジオコーディング用のインデックスは、初めてリバースジオコーディングを実行した時に自動的に作成されます。
この処理には長い時間がかかりますので、注意してください。
### 住所の属性情報を調べる
住所に関する情報を取得するには `searchNode()` を使います。
この関数は `jageocoder.result.Result` 型のリストを返します。
Result オブジェクトの node 要素から住所ノードにアクセスできます。
```python
>>> results = jageocoder.searchNode('新宿区西新宿2-8-1')
>>> len(results)
1
>>> results[0].matched
'新宿区西新宿2-8-'
>>> type(results[0].node)
<class 'jageocoder.node.AddressNode'>
>>> node = results[0].node
>>> node.get_fullname()
['東京都', '新宿区', '西新宿', '二丁目', '8番']
```
#### GeoJSON 表現を取得する
Result および AddressNode オブジェクトの `as_geojson()` メソッドを
利用すると GeoJSON 表現を取得できます。
```python
>>> results[0].as_geojson()
{'type': 'Feature', 'geometry': {'type': 'Point', 'coordinates': [139.691778, 35.689627]}, 'properties': {'id': 12299851, 'name': '8番', 'level': 7, 'note': None, 'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番'], 'matched': '新宿区西新宿2-8-'}}
>>> results[0].node.as_geojson()
{'type': 'Feature', 'geometry': {'type': 'Point', 'coordinates': [139.691778, 35.689627]}, 'properties': {'id': 12299851, 'name': '8番', 'level': 7, 'note': None, 'fullname': ['東京都', '新宿区', '西新宿', '二丁目', '8番']}}
```
#### 自治体コードを取得する
自治体コードには JISX0402(5桁)と地方公共団体コード(6桁)があります。
都道府県コード JISX0401(2桁)も取得できます。
```python
>>> node.get_city_jiscode() # 5桁コード
'13104'
>>> node.get_city_local_authority_code() # 6桁コード
'131041'
>>> node.get_pref_jiscode() # 都道府県コード
'13'
```
#### 地図へのリンクを取得する
地理院地図と Google 地図へのリンク URL を生成します。
```python
>>> node.get_gsimap_link()
'https://maps.gsi.go.jp/#16/35.689627/139.691778/'
>>> node.get_googlemap_link()
'https://maps.google.com/maps?q=35.689627,139.691778&z=16'
```
#### 親ノードを辿る
「親ノード」とは、住所の一つ上の階層を表すノードのことです。
ノードの属性 `parent` で取得します。
今 `node` は '8番' を指しているので、親ノードは '二丁目' になります。
```python
>>> parent = node.parent
>>> parent.get_fullname()
['東京都', '新宿区', '西新宿', '二丁目']
>>> parent.x, parent.y
(139.691774, 35.68945)
```
#### 子ノードを辿る
「子ノード」とは、住所の一つ下の階層を表すノードのことです。
ノードの属性 `children` で取得します。
親ノードは一つですが、子ノードは複数あります。
実際に返すのは SQL クエリオブジェクトですが、
イテレータでループしたり list にキャストできます。
今 `parent` は '二丁目' を指しているので、子ノードは
そこに含まれる街区符号(○番)になります。
```python
>>> parent.children
<sqlalchemy.orm.dynamic.AppenderQuery object at 0x7fbc08404b38>
>>> [child.name for child in parent.children]
['10番', '11番', '1番', '2番', '3番', '4番', '5番', '6番', '7番', '8番', '9番']
```
# 開発者向け情報
## Documentation
チュートリアルやリファレンスは [こちら](https://jageocoder.readthedocs.io/ja/latest/) 。
## 独自の辞書を作成したい場合
辞書コンバータ [jageocoder-converter](https://github.com/t-sagara/jageocoder-converter) を利用してください。Version 2.x 系列の辞書を作成するには、 jageocoder-converter も 2.x 以降を利用する必要があります。
## 異体字を追加したい場合
特定の文字を別の文字に読み替えたい場合、異体字辞書に登録します。
詳細は `itaiji_dic/README.md` を参照してください。
## サンプルウェブアプリ
Flask を利用したシンプルなウェブアプリのサンプルが
`flask-demo` の下にあります。
次の手順を実行し、ポート 5000 にアクセスしてください。
```bash
cd flask-demo
pip install flask flask-cors
bash run.sh
```
## ご協力頂ける場合
日本の住所表記は非常に多様なので、うまく変換できない場合にはお知らせ頂けるとありがたいです。ロジックの改良提案も歓迎します。どのような住所がどう解析されるべきかをご連絡頂ければ幸いです。
## 作成者
* **相良 毅** - [株式会社情報試作室](https://www.info-proto.com/)
## 利用許諾条件
MIT ライセンスでご利用頂けます。
This project is licensed under [the MIT License](https://opensource.org/licenses/mit-license.php).
ただしこの利用許諾条件は住所辞書データに対しては適用されません。
各辞書データのライセンスを参照してください。
## 謝辞
20年以上にわたり、アドレスマッチングサービスを継続するために所内ウェブサーバを利用させて頂いている東京大学空間情報科学研究センターに感謝いたします。
また、NIIの北本朝展教授には、比較的古い住所体系を利用している地域の
大規模なサンプルのご提供と、解析結果の確認に多くのご協力を頂きましたことを感謝いたします。
|
[
"Structured Data in NLP",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/hppRC/jawiki-cleaner
|
2021-02-21T11:43:43Z
|
Japanese Wikipedia Cleaner
|
hppRC / jawiki-cleaner
Public
Branches
Tags
Go to file
Go to file
Code
src/jawiki_cleaner
tests
.gitignore
README.md
poetry.lock
pyproject.toml
Split sentences at the proper positions, taking parentheses into account.
Normalize Unicode characters with NFKC.
Extract text from wiki links.
Remove unnecessary symbols.
Apply this tool to text extracted by WikiExtractor.
About
🧹Japanese Wikipedia Cleaner 🧹
pypi.org/project/jawiki-cleaner/
# nlp # wikipedia # python3
Readme
Activity
6 stars
3 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
1
Actions
Projects
Security
Insights
Japanese Wikipedia Cleaner
Installation
pip install jawiki-cleaner
Usage
# specify an input file path and an output file path
jawiki-cleaner --input ./wiki.txt --output ./cleaned-wiki.txt
jawiki-cleaner -i ./wiki.txt -o ./cleaned-wiki.txt
# or specify only an input file path
# output file path will be `./cleaned-wiki.txt`
jawiki-cleaner -i ./wiki.txt
Example
Before
<doc id="5" url="?curid=5" title="アンパサンド">
アンパサンド
アンパサンド (&、英語名:) とは並立助詞「…と…」を意味する記号である。ラテン語の の合字で、Trebuchet MSフォントでは、と表示され "
歴史.
その使用は1世紀に遡ることができ、5世紀中葉から現代に至るまでの変遷がわかる。
Z に続くラテン文字アルファベットの27字目とされた時期もある。
アンパサンドと同じ役割を果たす文字に「のet」と呼ばれる、数字の「7」に似た記号があった(, U+204A)。この記号は現在もゲール文字で使われてい
記号名の「アンパサンド」は、ラテン語まじりの英語「& はそれ自身 "and" を表す」(& per se and) のくずれた形である。英語以外の言
手書き.
日常的な手書きの場合、欧米でアンパサンドは「ε」に縦線を引く単純化されたものが使われることがある。
また同様に、「t」または「+(プラス)」に輪を重ねたような、無声歯茎側面摩擦音を示す発音記号「」のようなものが使われることもある。
README
</doc>
<doc id="11" url="?curid=11" title="日本語">
日本語
...
学校図書を除く四社の教科書では、単文節でできているものを「主語」のように「-語」と呼び、連文節でできているものを「主部」のように「-部」文文文大
種類とその役割.
以下、学校文法の区分に従いつつ、それぞれの文の成分の種類と役割とについて述べる。
主語・述語.
文を成り立たせる基本的な成分である。ことに述語は、文をまとめる重要な役割を果たす。「雨が降る。」「本が多い。」「私は学生だ。」などは、いずれも主・立文一方止
</doc>
Run jawiki-cleaner -i wiki.txt
After
アンパサンド(&、英語名)とは並立助詞「...と...」を意味する記号である。
ラテン語の の合字で、Trebuchet MSフォントでは、と表示され "et" の合字であることが容易にわかる。
ampersa、すなわち "and per se and"、その意味は"and [the symbol which] by itself [is] and"である。
その使用は1世紀に遡ることができ、5世紀中葉から現代に至るまでの変遷がわかる。
Z に続くラテン文字アルファベットの27字目とされた時期もある。
アンパサンドと同じ役割を果たす文字に「のet」と呼ばれる、数字の「7」に似た記号があった(U+204A)。
この記号は現在もゲール文字で使われている。
記号名の「アンパサンド」は、ラテン語まじりの英語「& はそれ自身 "and" を表す」(& per se and)のくずれた形である。
英語以外の言語での名称は多様である。
日常的な手書きの場合、欧米でアンパサンドは「ε」に縦線を引く単純化されたものが使われることがある。
また同様に、「t」または「+(プラス)」に輪を重ねたような、無声歯茎側面摩擦音を示す発音記号のようなものが使われることもある。
学校図書を除く四社の教科書では、単文節でできているものを「主語」のように「-語」と呼び、連文節でできているものを「主部」のように「-部」
と呼んでいる。
それに対し学校図書だけは、文節/連文節どうしの関係概念を「-語」と呼び、いわゆる成分(文を構成する個々の最大要素)を「-部」と呼んでいる。
以下、学校文法の区分に従いつつ、それぞれの文の成分の種類と役割とについて述べる。
文を成り立たせる基本的な成分である。
ことに述語は、文をまとめる重要な役割を果たす。
「雨が降る。」「本が多い。」「私は学生だ。」などは、いずれも主語・述語から成り立っている。
教科書によっては、述語を文のまとめ役として最も重視する一方、主語については修飾語と併せて説明するものもある(前節「主語廃止論」参照)。
|
# Japanese Wikipedia Cleaner
- Split sentences at the proper positions, taking parentheses into account.
- Normalize Unicode characters with NFKC.
- Extract text from wiki links.
- Remove unnecessary symbols.
Apply this tool to text extracted by WikiExtractor.
## Installation
```bash
pip install jawiki-cleaner
```
## Usage
```bash
# specify an input file path and an output file path
jawiki-cleaner --input ./wiki.txt --output ./cleaned-wiki.txt
jawiki-cleaner -i ./wiki.txt -o ./cleaned-wiki.txt
# or specify only an input file path
# output file path will be `./cleaned-wiki.txt`
jawiki-cleaner -i ./wiki.txt
```
## Example
### Before
```txt:wiki.txt
<doc id="5" url="?curid=5" title="アンパサンド">
アンパサンド
アンパサンド (&、英語名:) とは並立助詞「…と…」を意味する記号である。ラテン語の の合字で、Trebuchet MSフォントでは、と表示され "et" の合字であることが容易にわかる。ampersa、すなわち "and per se and"、その意味は"and [the symbol which] by itself [is] and"である。
歴史.
その使用は1世紀に遡ることができ、5世紀中葉から現代に至るまでの変遷がわかる。
Z に続くラテン文字アルファベットの27字目とされた時期もある。
アンパサンドと同じ役割を果たす文字に「のet」と呼ばれる、数字の「7」に似た記号があった(, U+204A)。この記号は現在もゲール文字で使われている。
記号名の「アンパサンド」は、ラテン語まじりの英語「& はそれ自身 "and" を表す」(& per se and) のくずれた形である。英語以外の言語での名称は多様である。
手書き.
日常的な手書きの場合、欧米でアンパサンドは「ε」に縦線を引く単純化されたものが使われることがある。
また同様に、「t」または「+(プラス)」に輪を重ねたような、無声歯茎側面摩擦音を示す発音記号「」のようなものが使われることもある。
</doc>
<doc id="11" url="?curid=11" title="日本語">
日本語
...
学校図書を除く四社の教科書では、単文節でできているものを「主語」のように「-語」と呼び、連文節でできているものを「主部」のように「-部」と呼んでいる。それに対し学校図書だけは、文節/連文節どうしの関係概念を「-語」と呼び、いわゆる成分(文を構成する個々の最大要素)を「-部」と呼んでいる。
種類とその役割.
以下、学校文法の区分に従いつつ、それぞれの文の成分の種類と役割とについて述べる。
主語・述語.
文を成り立たせる基本的な成分である。ことに述語は、文をまとめる重要な役割を果たす。「雨が降る。」「本が多い。」「私は学生だ。」などは、いずれも主語・述語から成り立っている。教科書によっては、述語を文のまとめ役として最も重視する一方、主語については修飾語と併せて説明するものもある(前節「主語廃止論」参照)。
</doc>
```
### Run `jawiki-cleaner -i wiki.txt`
### After
```
アンパサンド(&、英語名)とは並立助詞「...と...」を意味する記号である。
ラテン語の の合字で、Trebuchet MSフォントでは、と表示され "et" の合字であることが容易にわかる。
ampersa、すなわち "and per se and"、その意味は"and [the symbol which] by itself [is] and"である。
その使用は1世紀に遡ることができ、5世紀中葉から現代に至るまでの変遷がわかる。
Z に続くラテン文字アルファベットの27字目とされた時期もある。
アンパサンドと同じ役割を果たす文字に「のet」と呼ばれる、数字の「7」に似た記号があった(U+204A)。
この記号は現在もゲール文字で使われている。
記号名の「アンパサンド」は、ラテン語まじりの英語「& はそれ自身 "and" を表す」(& per se and)のくずれた形である。
英語以外の言語での名称は多様である。
日常的な手書きの場合、欧米でアンパサンドは「ε」に縦線を引く単純化されたものが使われることがある。
また同様に、「t」または「+(プラス)」に輪を重ねたような、無声歯茎側面摩擦音を示す発音記号のようなものが使われることもある。
学校図書を除く四社の教科書では、単文節でできているものを「主語」のように「-語」と呼び、連文節でできているものを「主部」のように「-部」と呼んでいる。
それに対し学校図書だけは、文節/連文節どうしの関係概念を「-語」と呼び、いわゆる成分(文を構成する個々の最大要素)を「-部」と呼んでいる。
以下、学校文法の区分に従いつつ、それぞれの文の成分の種類と役割とについて述べる。
文を成り立たせる基本的な成分である。
ことに述語は、文をまとめる重要な役割を果たす。
「雨が降る。」「本が多い。」「私は学生だ。」などは、いずれも主語・述語から成り立っている。
教科書によっては、述語を文のまとめ役として最も重視する一方、主語については修飾語と併せて説明するものもある(前節「主語廃止論」参照)。
```
|
[
"Information Extraction & Text Mining",
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/izuna385/jel
|
2021-03-08T16:14:17Z
|
Japanese Entity Linker.
|
izuna385 / jel
Public
8 Branches
1 Tag
Go to file
Go to file
Code
izuna385 Merge pull request #21 from izuna385/feature/addAPIPredictionMode
3bce15d · 3 years ago
docs
add logo
3 years ago
jel
enable API mode
3 years ago
scripts
fix for predictor
3 years ago
tests
add test for prediction
3 years ago
.dockerignore
enable fast image build
3 years ago
.gitignore
remove unnecessary data
3 years ago
Dockerfile
enable API mode.
3 years ago
LICENSE
add License
3 years ago
MANIFEST.in
fix for pypi
3 years ago
README.md
enable API mode
3 years ago
pytest.ini
add test config
3 years ago
requirements.txt
fix env setup and pypi package …
3 years ago
setup.py
fix version
3 years ago
jel - Japanese Entity Linker - is a bi-encoder-based entity linker for Japanese.
Currently, the link and question methods are supported.
This returns named entities and their candidate entries from Wikipedia titles.
About
Japanese Entity Linker.
pypi.org/project/jel/
# python # natural-language-processing
# transformers # pytorch # question-answering
# entity-linking # allennlp # jel
Readme
Apache-2.0 license
Activity
11 stars
1 watching
1 fork
Report repository
Releases 1
v0.1.1 release
Latest
on May 29, 2021
Packages
No packages published
Languages
Python 99.0%
Dockerfile 1.0%
Code
Issues
3
Pull requests
Actions
Projects
Security
Insights
jel: Japanese Entity Linker
Usage
el.link
from jel import EntityLinker
el = EntityLinker()
el.link('今日は東京都のマックにアップルを買いに行き、スティーブジョブスとドナルドに会い、堀田区に引っ越した。')
README
Apache-2.0 license
This returns candidate entities from Wikipedia titles for any question.
>> [
{
"text": "東京都",
"label": "GPE",
"span": [
3,
6
],
"predicted_normalized_entities": [
[
"東京都庁",
0.1084
],
[
"東京",
0.0633
],
[
"国家地方警察東京都本部",
0.0604
],
[
"東京都",
0.0598
],
...
]
},
{
"text": "アップル",
"label": "ORG",
"span": [
11,
15
],
"predicted_normalized_entities": [
[
"アップル",
0.2986
],
[
"アップル インコーポレイテッド",
0.1792
],
…
]
}
el.question
>>> linker.question('日本の総理大臣は?')
[('菅内閣', 0.05791765857101555), ('枢密院', 0.05592481946602986), ('党', 0.05430194711042564), ('総選挙', 0.052795400668513175)]
Setup
$ pip install jel
$ python -m spacy download ja_core_news_md
Run as API
$ uvicorn jel.api.server:app --reload --port 8000 --host 0.0.0.0 --log-level trace
Example
# link
$ curl localhost:8000/link -X POST -H "Content-Type: application/json" \
-d '{"sentence": "日本の総理は菅総理だ。"}'
$ python -m pytest
faiss==1.5.3 from pip causes an _swigfaiss error.
To solve this, see this issue.
Apache 2.0 License.
# question
$ curl localhost:8000/question -X POST -H "Content-Type: application/json" \
-d '{"sentence": "日本で有名な総理は?"}'
Test
Notes
LICENSE
CITATION
@INPROCEEDINGS{manabe2019chive,
author = {真鍋陽俊, 岡照晃, 海川祥毅, 髙岡一馬, 内田佳孝, 浅原正幸},
title = {複数粒度の分割結果に基づく日本語単語分散表現},
booktitle = "言語処理学会第25回年次大会(NLP2019)",
year = "2019",
pages = "NLP2019-P8-5",
publisher = "言語処理学会",
}
|
<p align="center"><img width="20%" src="docs/jel-logo.png"></p>
# jel: Japanese Entity Linker
* jel - Japanese Entity Linker - is a bi-encoder-based entity linker for Japanese.
# Usage
* Currently, the `link` and `question` methods are supported.
## `el.link`
* This returns named entities and their candidate entries from Wikipedia titles.
```python
from jel import EntityLinker
el = EntityLinker()
el.link('今日は東京都のマックにアップルを買いに行き、スティーブジョブスとドナルドに会い、堀田区に引っ越した。')
>> [
{
"text": "東京都",
"label": "GPE",
"span": [
3,
6
],
"predicted_normalized_entities": [
[
"東京都庁",
0.1084
],
[
"東京",
0.0633
],
[
"国家地方警察東京都本部",
0.0604
],
[
"東京都",
0.0598
],
...
]
},
{
"text": "アップル",
"label": "ORG",
"span": [
11,
15
],
"predicted_normalized_entities": [
[
"アップル",
0.2986
],
[
"アップル インコーポレイテッド",
0.1792
],
…
]
}
```
## `el.question`
* This returns candidate entities from Wikipedia titles for any question.
```python
>>> linker.question('日本の総理大臣は?')
[('菅内閣', 0.05791765857101555), ('枢密院', 0.05592481946602986), ('党', 0.05430194711042564), ('総選挙', 0.052795400668513175)]
```
## Setup
```
$ pip install jel
$ python -m spacy download ja_core_news_md
```
## Run as API
```
$ uvicorn jel.api.server:app --reload --port 8000 --host 0.0.0.0 --log-level trace
```
### Example
```
# link
$ curl localhost:8000/link -X POST -H "Content-Type: application/json" \
-d '{"sentence": "日本の総理は菅総理だ。"}'
# question
$ curl localhost:8000/question -X POST -H "Content-Type: application/json" \
-d '{"sentence": "日本で有名な総理は?"}'
```
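The same endpoints can also be called from Python. A minimal sketch using `requests` (an extra dependency, not part of jel), assuming the server started with uvicorn above is running locally on port 8000:
```python
# Minimal sketch: POST a sentence to the /link endpoint shown in the curl example.
import requests

resp = requests.post(
    "http://localhost:8000/link",
    json={"sentence": "日本の総理は菅総理だ。"},
)
print(resp.json())
```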
## Test
`$ python -m pytest`
## Notes
* faiss==1.5.3 from pip causes an `_swigfaiss` error.
* To solve this, see [this issue](https://github.com/facebookresearch/faiss/issues/821#issuecomment-573531694).
## LICENSE
Apache 2.0 License.
## CITATION
```
@INPROCEEDINGS{manabe2019chive,
author = {真鍋陽俊, 岡照晃, 海川祥毅, 髙岡一馬, 内田佳孝, 浅原正幸},
title = {複数粒度の分割結果に基づく日本語単語分散表現},
booktitle = "言語処理学会第25回年次大会(NLP2019)",
year = "2019",
pages = "NLP2019-P8-5",
publisher = "言語処理学会",
}
```
|
[
"Information Extraction & Text Mining",
"Knowledge Representation",
"Relation Extraction",
"Semantic Text Processing"
] |
[] |
true |
https://github.com/shihono/alphabet2kana
|
2021-03-21T03:27:37Z
|
Convert English alphabet to Katakana
|
shihono / alphabet2kana
Public
Branches
Tags
Go to file
Go to file
Code
.github
alphab…
tests
.gitignore
LICEN…
READ…
poetry.l…
pyproj…
pypi v0.1.6 · python 3.7 | 3.8 | 3.9 | 3.10 | 3.11
Convert English alphabet to Katakana
アルファベットの日本語表記は Unidic と 英語アルファベット -
Wikipedia を参考にしています。
特に、Z は ゼット 表記です。
About
Convert English alphabet to
Katakana
pypi.org/project/alphabet2kana/
# katakana # japanese
Readme
MIT license
Activity
13 stars
1 watching
0 forks
Report repository
Releases 2
Release v0.1.6
Latest
on Oct 9, 2023
+ 1 release
Packages
No packages published
Contributors
3
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
alphabet2kana
Installation
pip install alphabet2kana
README
MIT license
半角にのみ対応しています。 全角アルファベットは mojimoji や jaconv
などで半角に変換してください。
Usage
from alphabet2kana import a2k
a2k("ABC")
# "エービーシー"
a2k("Alphabetと日本語")
# "エーエルピーエイチエービーイーティーと日本語"
a2k("Alphabetと日本語", delimiter="・")
# "エー・エル・ピー・エイチ・エー・ビー・イー・ティーと日本語"
a2k('k8s', delimiter='・', numeral=True)
# "ケー・エイト・エス"
|
# alphabet2kana


Convert English alphabet to Katakana
アルファベットの日本語表記は [Unidic](https://unidic.ninjal.ac.jp/)
と [英語アルファベット - Wikipedia](https://ja.wikipedia.org/wiki/%E8%8B%B1%E8%AA%9E%E3%82%A2%E3%83%AB%E3%83%95%E3%82%A1%E3%83%99%E3%83%83%E3%83%88) を参考にしています。
特に、`Z` は `ゼット` 表記です。
## Installation
```bash
pip install alphabet2kana
```
## Usage
```python
from alphabet2kana import a2k
a2k("ABC")
# "エービーシー"
a2k("Alphabetと日本語")
# "エーエルピーエイチエービーイーティーと日本語"
a2k("Alphabetと日本語", delimiter="・")
# "エー・エル・ピー・エイチ・エー・ビー・イー・ティーと日本語"
a2k('k8s', delimiter='・', numeral=True)
# "ケー・エイト・エス"
```
半角にのみ対応しています。
全角アルファベットは [mojimoji](https://github.com/studio-ousia/mojimoji) や [jaconv](https://github.com/ikegami-yukino/jaconv)
などで半角に変換してください。
Only half-width characters are supported.
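A minimal sketch of the full-width-to-half-width preprocessing mentioned above, using mojimoji before calling `a2k` (the `kana=False` flag keeps any katakana in the input untouched):

```python
# Convert full-width alphabet to half-width with mojimoji, then to katakana with a2k.
import mojimoji
from alphabet2kana import a2k

text = "ABCと日本語"  # full-width alphabet
half = mojimoji.zen_to_han(text, kana=False)
print(a2k(half))
# "エービーシーと日本語"
```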
|
[
"Paraphrasing",
"Syntactic Text Processing",
"Text Normalization"
] |
[] |
true |
https://github.com/yagays/manbyo-sudachi
|
2021-04-05T12:12:21Z
|
Sudachi向け万病辞書
|
yagays / manbyo-sudachi
Public
Branches
Tags
Go to file
Go to file
Code
SudachiPy @ …
config
data
tests
util @ 8f6947f
.gitignore
.gitmodules
LICENSE
README.md
build_dic.sh
convert.py
download_ma…
manbyo20190…
manbyo20190…
万病辞書を形態素解析器Sudachiで利用する - Out-of-the-box
万病辞書MANBYO_201907 から、Sudachiのユーザ辞書形式に変換したファイル
manbyo20190704_sabc_dic.txt : 万病辞書で信頼度LEVELがS,A,B,Cの病名のみ
manbyo20190704_all_dic.txt : 万病辞書のすべての病名
About
No description, website, or topics
provided.
Readme
CC-BY-4.0 license
Activity
7 stars
2 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 92.5%
Shell 7.5%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Sudachi向け万病辞書
配布辞書
使い方
1. レポジトリをcloneする
README
CC-BY-4.0 license
BASE_PATH を自身の環境のsystem.dic が置かれている場所に変更して、下記コマンドを実行。
$ git clone --recursive https://github.com/yagays/manbyo-sudachi
2. sudachipyをインストールする
$ pip install sudachipy
3. バイナリ辞書をビルドする
BASE_PATH=/path/to/sudachidict/resources/
sudachipy ubuild -s $BASE_PATH/system.dic manbyo20190704_sabc_dic.txt -o user_manbyo_sabc.dic
sudachipy ubuild -s $BASE_PATH/system.dic manbyo20190704_all_dic.txt -o user_manbyo_all.dic
|
# Sudachi向け万病辞書
[万病辞書を形態素解析器Sudachiで利用する \- Out\-of\-the\-box](https://yag-ays.github.io/project/manbyo-sudachi/)
## 配布辞書
万病辞書`MANBYO_201907`から、Sudachiのユーザ辞書形式に変換したファイル
- `manbyo20190704_sabc_dic.txt`: 万病辞書で信頼度LEVELがS,A,B,Cの病名のみ
- `manbyo20190704_all_dic.txt`: 万病辞書のすべての病名
## 使い方
### 1. レポジトリをcloneする
```sh
$ git clone --recursive https://github.com/yagays/manbyo-sudachi
```
### 2. sudachipyをインストールする
```sh
$ pip install sudachipy
```
### 3. バイナリ辞書をビルドする
`BASE_PATH`を自身の環境の`system.dic`が置かれている場所に変更して、下記コマンドを実行。
```sh
BASE_PATH=/path/to/sudachidict/resources/
sudachipy ubuild -s $BASE_PATH/system.dic manbyo20190704_sabc_dic.txt -o user_manbyo_sabc.dic
sudachipy ubuild -s $BASE_PATH/system.dic manbyo20190704_all_dic.txt -o user_manbyo_all.dic
```
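A minimal usage sketch (not part of the original README): it assumes SudachiPy and a system dictionary are installed, and that `sudachi.json` is a copy of the default Sudachi settings with the built dictionary registered via a `userDict` entry; the example sentence is only illustrative.

```python
# Minimal sketch: sudachi.json is assumed to contain, in addition to the defaults,
#   "userDict": ["user_manbyo_all.dic"]
from sudachipy import dictionary, tokenizer

tokenizer_obj = dictionary.Dictionary(config_path="sudachi.json").create()
mode = tokenizer.Tokenizer.SplitMode.C
for m in tokenizer_obj.tokenize("急性心筋梗塞と診断された", mode):
    print(m.surface(), m.part_of_speech()[0], m.normalized_form())
```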
|
[
"Morphology",
"Syntactic Text Processing",
"Tagging",
"Text Segmentation"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/gecko655/proofreading-tool
|
2021-04-08T03:06:36Z
|
GUIで動作する文書校正ツール GUI tool for textlinting.
|
gecko655 / proofreading-tool
Public
5 Branches
7 Tags
Go to file
Go to file
Code
gecko655 Merge pull request #66 f…
4452ebd · 5 months ago
.github…
use yarn
5 months ago
assets/…
preload.jsを…
3 years ago
build
support arm64
last year
config
proofreading-t…
3 years ago
lib
proofreading-t…
3 years ago
script/t…
proofreading-t…
3 years ago
textlint…
lint fix
3 years ago
.babelrc
proofreading-t…
3 years ago
.eslintr…
lint fix
3 years ago
.gitattri…
proofreading-t…
3 years ago
.gitignore
proofreading-t…
3 years ago
LICEN…
Create LICEN…
3 years ago
READ…
use yarn
5 months ago
index.h…
mark.jsもvue…
3 years ago
main.js
Use @nosfer…
2 years ago
packag…
release 1.0.10
last year
preloa…
preload.jsを…
3 years ago
webpa…
mark.jsもvue…
3 years ago
yarn.lock
use yarn
5 months ago
About
GUIで動作する文書校正ツール GUI
tool for textlinting.
gecko655.hatenablog.com/entry/…
# proofreading
Readme
GPL-3.0 license
Activity
86 stars
4 watching
8 forks
Report repository
Releases 6
1.0.11
Latest
on Mar 14, 2023
+ 5 releases
Packages
No packages published
Contributors
2
gecko655 YAMAMORI, Akihiro
dependabot[bot]
Languages
JavaScript 78.1%
HTML 20.7%
Shell 1.2%
Code
Issues
1
Pull requests
3
Actions
Projects
Security
Insights
README
GPL-3.0 license
Build/release: no status · downloads: 2.5k (Download count not included for v1.0.4 or earlier versions)
GUIで動作する文書校正ツール GUI tool for textlinting.
https://gecko655.hatenablog.com/entry/proofreading-tool
https://github.com/gecko655/proofreading-tool/releases
proofreading-tool
Usage
Install
How to build
Prepare
# Fetch dependencies
yarn install
# Build webpack
yarn webpack # or `npm run webpack-prod` or `npm run webpack-watch`
The build artifacts should be located under the dist/ folder.
Edit package.json to update the version number.
Push a tag with the same version number, prefixed with 'v'.
A GitHub Action creates a draft release.
Release the draft.
This software is released under GPLv3 LICENSE.
This software uses xpdf(pdftotext), which is released under GPLv3 license.
https://github.com/mixigroup
Debug
yarn start
Test
yarn lint # or `npm run lint:fix` (prettier fixes the code format)
Build for production
yarn webpack-prod
yarn dist:mac # or `npm run dist:win`
Release
git tag vX.Y.Z
git push --tags
LICENSE
special thanks
|
proofreading-tool
===
[](https://github.com/gecko655/proofreading-tool/actions/workflows/electron-release.yml)

(Download count not included for v1.0.4 or earlier versions)

GUIで動作する文書校正ツール
GUI tool for textlinting.
# Usage
https://gecko655.hatenablog.com/entry/proofreading-tool
# Install
https://github.com/gecko655/proofreading-tool/releases
# How to build
## Prepare
```bash
# Fetch dependencies
yarn install
# Build webpack
yarn webpack # or `npm run webpack-prod` or `npm run webpack-watch`
```
## Debug
```bash
yarn start
```
## Test
```bash
yarn lint # or `npm run lint:fix` (prettier fixes the code format)
```
## Build for production
```bash
yarn webpack-prod
yarn dist:mac # or `npm run dist:win`
```
The build artifacts should be located under the `dist/` folder.
## Release
- Edit package.json to update the version number.
- Push a tag with the same version number, prefixed with 'v'.
```bash
git tag vX.Y.Z
git push --tags
```
- [GitHub Action](https://github.com/gecko655/proofreading-tool/actions) creates a [draft release](https://github.com/gecko655/proofreading-tool/releases)
- Release the draft.
# LICENSE
This software is released under [GPLv3 LICENSE](LICENSE).
This software uses [xpdf(pdftotext)](https://www.xpdfreader.com/), which is released under GPLv3 license.
# special thanks
https://github.com/mixigroup
|
[
"Natural Language Interfaces",
"Syntactic Text Processing",
"Text Error Correction",
"Text Normalization"
] |
[] |
true |
https://github.com/kotofurumiya/genshin-dict
|
2021-05-04T13:59:24Z
|
Windows/macOSで使える原神の単語辞書です
|
kotofurumiya / genshin-dict
Public
1 Branch
48 Tags
Go to file
Go to file
Code
kotofurumiya fix: duplicate 波しぶきのエラ
2a81084 · 2 months ago
.github/w…
docs
genshin-d…
scripts
worddata
.gitignore
.prettierrc…
CONTRI…
LICENSE
README…
package-l…
package.j…
tsconfig.j…
原神の日本語入力用辞書です。 人名、地名、装備名などをカバーしています。
登録データについては登録単語の一覧をご覧ください。
About
Windows/macOSで使える原神の単
語辞書です
Readme
Zlib license
Activity
104 stars
7 watching
7 forks
Report repository
Releases 47
v5.0.1
Latest
on Sep 16
+ 46 releases
Packages
No packages published
Contributors
3
kotofurumiya Koto Furumiya
masayatoy
xicri Xicri
Languages
TypeScript 99.9%
JavaScript 0.1%
Code
Issues
Pull requests
3
Actions
Security
Insights
原神辞書(Windows/macOS)
登録データ
README
Zlib license
以下のページより genshin-dictionary.zip をダウンロードしてください。 zipファイルを展開すると
それぞれの環境用のファイルが入っています。
https://github.com/kotofurumiya/genshin-dict/releases/latest
IME
対応ファイル
備考
Windows標準
原神辞書_Windows.txt
macOSユーザ辞書
原神辞書_macOS_ユーザ辞書.plist
macOS追加辞書
原神辞書_macOS.txt
iPhoneユーザ辞書
mac経由で追加可能(後述)
Google IME
原神辞書_Windows.txt
macでもWindows用ファイルで追加可能
タスクバーの右のほうにあるIMEアイコンを右クリックします。 IMEアイコンは「A」とか「あ」みたい
な表示になっていると思います。
「単語の追加」を選びます。
ダウンロード
対応IME
利用方法(Windows)
単語の登録ウィンドウが開くので、左下の「ユーザ辞書ツール」をクリックします。
ユーザ辞書ツールが開くので、「ツール」から「テキストファイルからの登録」を選びます。 ここでダウ
ンロードしたファイルの中にある「原神辞書_Windows.txt」を選択します。
あとは自動的に登録されます。
macでは2つの方法で辞書を登録できます。
ユーザ辞書として扱う
普通の単語登録と同じ
ユーザ辞書に単語が大量に並んでしまう
iCloud同期ができる(macで登録するとiPhoneでも使える)
追加辞書として扱う
あとで一括削除できる
iCloudで同期できない(iPhoneで使えない)
まずはiPhoneでも同時に使える、ユーザ辞書として扱う方法を紹介します。
macの設定から「キーボード」を開き、「ユーザ辞書」タブを選択します。
利用方法(macOSとiPhoneを同期)
ダウンロードしたファイルの中にある「原神辞書_macOS_ユーザ辞書.plist」を左の単語欄にドラッグ&
ドロップします。
これで登録は完了です。
次に追加辞書として扱う方法を紹介します。 この方法では設定がスッキリしますが、iPhoneと同期でき
ません。
macの設定から「キーボード」を開き、「入力ソース」タブを選択します。
利用方法(macOSのみ)
左から「日本語」を選びそのまま下にスクロールすると「追加辞書」という項目が見えます。
|
# 原神辞書(Windows/macOS)
[原神](https://genshin.hoyoverse.com/ja/home)の日本語入力用辞書です。
人名、地名、装備名などをカバーしています。
## 登録データ
登録データについては[登録単語の一覧](./docs/dict_data.md)をご覧ください。
## ダウンロード
以下のページより `genshin-dictionary.zip` をダウンロードしてください。
zipファイルを展開するとそれぞれの環境用のファイルが入っています。
https://github.com/kotofurumiya/genshin-dict/releases/latest
## 対応IME
| IME | 対応ファイル | 備考 |
|----------------:|:-------------------------------|:------------------------------|
| Windows標準 | 原神辞書_Windows.txt | |
| macOSユーザ辞書 | 原神辞書_macOS_ユーザ辞書.plist | |
| macOS追加辞書 | 原神辞書_macOS.txt | |
| iPhoneユーザ辞書 | | mac経由で追加可能(後述) |
| Google IME | 原神辞書_Windows.txt | macでもWindows用ファイルで追加可能 |
## 利用方法(Windows)
タスクバーの右のほうにあるIMEアイコンを右クリックします。
IMEアイコンは「A」とか「あ」みたいな表示になっていると思います。

「単語の追加」を選びます。

単語の登録ウィンドウが開くので、左下の「ユーザ辞書ツール」をクリックします。

ユーザ辞書ツールが開くので、「ツール」から「テキストファイルからの登録」を選びます。
ここでダウンロードしたファイルの中にある「原神辞書_Windows.txt」を選択します。

あとは自動的に登録されます。
## 利用方法(macOSとiPhoneを同期)
macでは2つの方法で辞書を登録できます。
- ユーザ辞書として扱う
- 普通の単語登録と同じ
- ユーザ辞書に単語が大量に並んでしまう
- iCloud同期ができる(macで登録するとiPhoneでも使える)
- 追加辞書として扱う
- あとで一括削除できる
- iCloudで同期できない(iPhoneで使えない)
まずはiPhoneでも同時に使える、ユーザ辞書として扱う方法を紹介します。
macの設定から「キーボード」を開き、「ユーザ辞書」タブを選択します。

ダウンロードしたファイルの中にある「原神辞書_macOS_ユーザ辞書.plist」を左の単語欄にドラッグ&ドロップします。
これで登録は完了です。
## 利用方法(macOSのみ)
次に追加辞書として扱う方法を紹介します。
この方法では設定がスッキリしますが、iPhoneと同期できません。
macの設定から「キーボード」を開き、「入力ソース」タブを選択します。

左から「日本語」を選びそのまま下にスクロールすると「追加辞書」という項目が見えます。

右クリックして「辞書をインストール」を選んで、ダウンロードしたファイルの中にある「原神辞書_macOS.txt」を選択します。
これで辞書が利用できるはずです。
## トラブルシューティング
### うまく変換できない
登録したばかりだと優先度が低めだったりで、変換候補としてなかなか出てこなかったりします。
何度も使って学習させましょう。
また単語区切りも適切でないことがあります。「岩王帝君」を変換しようとして「ガン王弟くん」のようになったり……。
この場合は変換しながらShiftキーを押しつつ矢印キーの左右で変換範囲を変えることができるので、うまく調整してみてください。
### iPhone / Androidで使える?
iPhoneはmacOSで同じAppleIDでログインして、ユーザ辞書として登録すれば同期されます。
iPhone単独で一括登録する方法は無さそうです。
Androidはあんまりわかってないです……。手元にPixel4 XLはあるんですが、開発用に持っているだけなので。
## ライセンス
プログラム部分は[Zlibライセンス](./LICENSE)となります。
Zlibライセンスでは「これは自分が作った」と嘘をつかない限り自由な個人利用・商用利用や改変が認められています。
辞書に含まれる単語はすべてmiHoYoのものとなります。
可能な限り個人的な活用にとどめ、商標などに抵触しない範囲でご利用ください。
## Contributors
- 鍾離先生( https://www.youtube.com/watch?v=0Lp5wBSXLMM )
- 古都こと( https://twitter.com/kfurumiya )
|
[] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
true |
https://github.com/ken11/noyaki
|
2021-05-23T09:41:16Z
|
Converts character span label information to tokenized text-based label information.
|
ken11 / noyaki
Public
Branches
Tags
Go to file
Go to file
Code
src/noy…
tests
.gitignore
LICEN…
READ…
setup.py
Converts character span label information to tokenized text-based label
information.
Pass the tokenized text and label information as arguments to the convert
function.
About
No description, website, or topics
provided.
Readme
MIT license
Activity
6 stars
1 watching
0 forks
Report repository
Releases 1
add IOB2 format support
Latest
on Aug 25, 2022
Packages
No packages published
Languages
Python 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
noyaki
Installation
$ pip install noyaki
Usage
import noyaki
label_list = noyaki.convert(
['明日', 'は', '田中', 'さん', 'に', '会う'],
[[3, 5, 'PERSON']]
)
README
MIT license
If you want to remove the subword symbol (eg. ##), specify the subword
argument.
If you want to use IOB2 tag format, specify the scheme argument.
Only Japanese is supported.
Supported tag formats are as follows:
BILOU
IOB2
print(label_list)
# ['O', 'O', 'U-PERSON', 'O', 'O', 'O']
import noyaki
label_list = noyaki.convert(
['明日', 'は', '田', '##中', 'さん', 'に', '会う'],
[[3, 5, 'PERSON']],
subword="##"
)
print(label_list)
# ['O', 'O', 'B-PERSON', 'L-PERSON', 'O', 'O', 'O']
import noyaki
label_list = noyaki.convert(
['明日', 'は', '田', '##中', 'さん', 'に', '会う'],
[[3, 5, 'PERSON']],
scheme="IOB2"
)
print(label_list)
# ['O', 'O', 'B-PERSON', 'I-PERSON', 'O', 'O', 'O']
Note
|
# noyaki
Converts character span label information to tokenized text-based label information.
## Installation
```sh
$ pip install noyaki
```
## Usage
Pass the tokenized text and label information as arguments to the convert function.
```py
import noyaki
label_list = noyaki.convert(
['明日', 'は', '田中', 'さん', 'に', '会う'],
[[3, 5, 'PERSON']]
)
print(label_list)
# ['O', 'O', 'U-PERSON', 'O', 'O', 'O']
```
If you want to remove the subword symbol (eg. ##), specify the `subword` argument.
```py
import noyaki
label_list = noyaki.convert(
['明日', 'は', '田', '##中', 'さん', 'に', '会う'],
[[3, 5, 'PERSON']],
subword="##"
)
print(label_list)
# ['O', 'O', 'B-PERSON', 'L-PERSON', 'O', 'O', 'O']
```
If you want to use IOB2 tag format, specify the `scheme` argument.
```py
import noyaki
label_list = noyaki.convert(
['明日', 'は', '田', '##中', 'さん', 'に', '会う'],
[[3, 5, 'PERSON']],
scheme="IOB2"
)
print(label_list)
# ['O', 'O', 'B-PERSON', 'I-PERSON', 'O', 'O', 'O']
```
## Note
Only Japanese is supported.
Supported tag formats are as follows:
- BILOU
- IOB2
|
[
"Chunking",
"Named Entity Recognition",
"Syntactic Text Processing"
] |
[] |
true |
https://github.com/chck/AugLy-jp
|
2021-06-13T09:48:45Z
|
Data Augmentation for Japanese Text on AugLy
|
chck / AugLy-jp
Public
Branches
Tags
Go to file
Go to file
Code
.github
augly_jp
examples
tests
.gitignore
LICENSE
README.md
poetry.lock
pyproject.toml
setup.cfg
Data Augmentation for Japanese Text on AugLy
python 3.8 | 3.9 · Test: no status · coverage: 83% · code style: black
base_text = "あらゆる現実をすべて自分のほうへねじ曲げたのだ"
Augmenter
Augmented
Description
SynonymAugmenter
あらゆる現実をすべて自身のほうへねじ曲げ
たのだ
Substitute similar word according to Sudachi synonym
WordEmbsAugmenter
あらゆる現実をすべて関心のほうへねじ曲げ
たのだ
Leverage word2vec, GloVe or fasttext embeddings to
apply augmentation
FillMaskAugmenter
つまり現実を、未来な未来まで変えたいんだ
Using masked language model to generate text
BackTranslationAugmenter
そして、ほかの人たちをそれぞれの道に安置
しておられた
Leverage two translation models for augmentation
Software
Install Command
Python 3.8.11
pyenv install 3.8.11
Poetry 1.1.*
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
About
Data Augmentation for Japanese
Text on AugLy
# natural-language-processing # japanese
# data-augmentation # sudachi # ginza
# nlpaug # augly
Readme
MIT license
Activity
7 stars
2 watching
0 forks
Report repository
Releases
8 tags
Languages
Python 100.0%
Code
Issues
4
Pull requests
Actions
Projects
Security
Insights
AugLy-jp
Augmenter
Prerequisites
Get Started
Installation
README
MIT license
Or clone this repository:
https://github.com/facebookresearch/AugLy
https://github.com/makcedward/nlpaug
https://github.com/QData/TextAttack
This software includes the work that is distributed in the Apache License 2.0 [1].
pip install augly-jp
git clone https://github.com/chck/AugLy-jp.git
poetry install
Test with reformat
poetry run task test
Reformat
poetry run task fmt
Lint
poetry run task lint
Inspired
License
|
# AugLy-jp
> Data Augmentation for **Japanese Text** on AugLy
[![PyPI Version][pypi-image]][pypi-url]
[![Python Version][python-image]][python-image]
[![Python Test][test-image]][test-url]
[![Test Coverage][coverage-image]][coverage-url]
[![Code Quality][quality-image]][quality-url]
[![Python Style Guide][black-image]][black-url]
## Augmenter
`base_text = "あらゆる現実をすべて自分のほうへねじ曲げたのだ"`
Augmenter | Augmented | Description
:---:|:---:|:---:
SynonymAugmenter|あらゆる現実をすべて自身のほうへねじ曲げたのだ|Substitute similar word according to [Sudachi synonym](https://github.com/WorksApplications/SudachiDict/blob/develop/docs/synonyms.md)
WordEmbsAugmenter|あらゆる現実をすべて関心のほうへねじ曲げたのだ|Leverage word2vec, GloVe or fasttext embeddings to apply augmentation
FillMaskAugmenter|つまり現実を、未来な未来まで変えたいんだ|Using masked language model to generate text
BackTranslationAugmenter|そして、ほかの人たちをそれぞれの道に安置しておられた|Leverage two translation models for augmentation
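A hypothetical usage sketch of one of the augmenters above. The import path `augly_jp.text` and the `SynonymAugmenter()` constructor and `augment()` call are assumptions, not taken from the project's documentation; check the repository's `examples/` directory for the actual API.

```python
# Hypothetical sketch -- module path and method names are assumed, not documented here.
from augly_jp.text import SynonymAugmenter

aug = SynonymAugmenter()
base_text = "あらゆる現実をすべて自分のほうへねじ曲げたのだ"
print(aug.augment(base_text))
# e.g. "あらゆる現実をすべて自身のほうへねじ曲げたのだ" (see the table above)
```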
## Prerequisites
| Software | Install Command |
|----------------------------|----------------------------|
| [Python 3.8.11][python] | `pyenv install 3.8.11` |
| [Poetry 1.1.*][poetry] | `curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py \| python`|
[python]: https://www.python.org/downloads/release/python-3811/
[poetry]: https://python-poetry.org/
## Get Started
### Installation
```bash
pip install augly-jp
```
Or clone this repository:
```bash
git clone https://github.com/chck/AugLy-jp.git
poetry install
```
### Test with reformat
```bash
poetry run task test
```
### Reformat
```bash
poetry run task fmt
```
### Lint
```bash
poetry run task lint
```
## Inspired
- https://github.com/facebookresearch/AugLy
- https://github.com/makcedward/nlpaug
- https://github.com/QData/TextAttack
## License
This software includes the work that is distributed in the Apache License 2.0 [[1]][apache1-url].
[pypi-image]: https://badge.fury.io/py/augly-jp.svg
[pypi-url]: https://badge.fury.io/py/augly-jp
[python-image]: https://img.shields.io/pypi/pyversions/augly-jp.svg
[test-image]: https://github.com/chck/AugLy-jp/workflows/Test/badge.svg
[test-url]: https://github.com/chck/Augly-jp/actions?query=workflow%3ATest
[coverage-image]: https://img.shields.io/codecov/c/github/chck/AugLy-jp?color=%2334D058
[coverage-url]: https://codecov.io/gh/chck/AugLy-jp
[quality-image]: https://img.shields.io/lgtm/grade/python/g/chck/AugLy-jp.svg?logo=lgtm&logoWidth=18
[quality-url]: https://lgtm.com/projects/g/chck/AugLy-jp/context:python
[black-image]: https://img.shields.io/badge/code%20style-black-black
[black-url]: https://github.com/psf/black
[apache1-url]: https://github.com/cl-tohoku/bert-japanese/blob/v2.0/LICENSE
|
[
"Language Models",
"Low-Resource NLP",
"Representation Learning",
"Text Generation"
] |
[] |
true |
https://github.com/po3rin/hirakanadic
|
2021-06-22T07:41:42Z
|
Allows Sudachi to normalize from hiragana to katakana from any compound word list
|
po3rin / hirakanadic
Public
Branches
Tags
Go to file
Go to file
Code
.github…
example
hiraka…
tests
.gitignore
.pytho…
Makefile
READ…
poetry.l…
pyproj…
pypi v0.0.4 · PyTest: no status · python 3.7+
About
Allows Sudachi to normalize from
hiragana to katakana from any
compound word list
Readme
Activity
7 stars
3 watching
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Python 98.3%
Makefile 1.7%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
hirakanadic
Install
$ pip install hirakanadic
Usage
$ hirakanadic example/input.txt -o out.txt
README
input file
result
コレステロール値
陰のうヘルニア
濾胞性リンパ腫
コリネバクテリウム・ウルセランス感染症
これすてろーる,5146,5146,7000,これすてろーる,
名詞,普通名詞,一般,*,*,*,コレステロール,コレス
テロール,*,*,*,*,*
へるにあ,5146,5146,7000,へるにあ,名詞,普通名
詞,一般,*,*,*,ヘルニア,ヘルニア,*,*,*,*,*
こりねばくてりうむ,5146,5146,7000,こりねばくて
りうむ,名詞,普通名詞,一般,*,*,*,コリネバクテリ
ウム,コリネバクテリウム,*,*,*,*,*
うるせらんす,5146,5146,7000,うるせらんす,名
詞,普通名詞,一般,*,*,*,ウルセランス,ウルセラン
ス,*,*,*,*,*
|
# hirakanadic
[](https://pypi.python.org/pypi/hirakanadic/)

[](https://www.python.org/downloads/release/python-390/)
## Install
```sh
$ pip install hirakanadic
```
## Usage
```sh
$ hirakanadic example/input.txt -o out.txt
```
input file
```
コレステロール値
陰のうヘルニア
濾胞性リンパ腫
コリネバクテリウム・ウルセランス感染症
```
result
```
これすてろーる,5146,5146,7000,これすてろーる,名詞,普通名詞,一般,*,*,*,コレステロール,コレステロール,*,*,*,*,*
へるにあ,5146,5146,7000,へるにあ,名詞,普通名詞,一般,*,*,*,ヘルニア,ヘルニア,*,*,*,*,*
こりねばくてりうむ,5146,5146,7000,こりねばくてりうむ,名詞,普通名詞,一般,*,*,*,コリネバクテリウム,コリネバクテリウム,*,*,*,*,*
うるせらんす,5146,5146,7000,うるせらんす,名詞,普通名詞,一般,*,*,*,ウルセランス,ウルセランス,*,*,*,*,*
```
|
[
"Syntactic Text Processing",
"Term Extraction",
"Text Normalization"
] |
[] |
true |
https://github.com/Netdex/niinii
|
2021-06-26T23:05:50Z
|
Japanese glossator for assisted reading of text using Ichiran
|
Netdex / niinii
Public
Branches
Tags
Go to file
Go to file
Code
ichiran
niinii
openai…
third-p…
.gitignore
.gitmo…
Cargo.l…
Cargo.…
LICEN…
NOTICE
READ…
This project is a work-in-progress.
About
Japanese glossator for assisted
reading of text using Ichiran
# language # translation # dictionary
# japanese # visual-novel # japanese-language
# gloss
Readme
MIT license
Activity
7 stars
2 watching
0 forks
Report repository
Releases
1 tags
Packages
No packages published
Languages
Rust 100.0%
Code
Issues
Pull requests
Actions
Projects
Security
Insights
niinii
README
MIT license
Demonstration
niinii (knee-knee) is a graphical frontend for glossing Japanese
text. Useful for assisted reading of text for language learning
purposes. A primary use case is glossing visual novels, which is
shown in the demonstration above. I made this tool with the
express intent to read a single specific visual novel, which is
also where the name comes from. If someone else finds it
useful that's cool too.
For example, in the demonstration above, I use niinii along with
a text hooker to gloss the dialogue in a visual novel. The
segmented phrase along with ruby text (i.e. furigana) is
displayed. Hovering over a segment will show dictionary
definitions and inflections from JMDict. You can pop open a
separate window by clicking on a segment. Hovering over kanji
will show kanji information from KANJIDIC2. I would write a
more detailed user manual but I think you can probably figure it
out.
Japanese language support is implemented using Ichiran by
tshatrov. Ichiran is pretty amazing at text segmentation
compared to other tools I've tried.
This is a tool created to service a personal need, and may not
be useful to you. Below, I laid out my personal justification for
investing time into creating this tool. If you agree, then this tool
may be useful for you.
Why not use MeCab, JParser, ChaSen, Jisho etc.?: In my
experience ichiran is much better at segmentation, provides
more metadata, and makes fewer mistakes.
Why not use rikai(kun|chan), JGlossator?: They don't do
segmentation.
Why not use DeepL, Google Translate, etc.?: I want a gloss,
not a translation tool. If I ever integrate translation features, I'd
like to do so in a way that supplements the gloss rather than
dumping text.
Why not use the web frontend ichi.moe?: There are some
features I'd like to experiment with to improve the glossing
experience.
Why not use...
Prepackaged builds are available in the Releases section of
this repository.
The only target that is properly maintained is x86_64-pc-
windows-msvc . There's nothing stopping it from working on
other targets (e.g. Linux), but additional work may be required.
To build the application from source on Windows:
If using the hooking feature, the bitness of the DLL must match
the target application. To build a 32-bit application:
For Japanese language support, the following additional
components are required:
ichiran-cli (Ichiran)
PostgreSQL installation with Ichiran database
You must provide the location of these additional components
in the Settings pane of the application.
Given that the process of building these components for use
with niinii is quite involved, prebuilt versions are included with
the prepackaged builds.
Seems like a problem with winit. niinii is almost always used in
the foreground anyways because of always on top, so I'm not
going to bother fixing this.
Build
# install vcpkg
git clone https://github.com/microsoft/vcpkg
.\vcpkg\bootstrap-vcpkg.bat
# install freetype
vcpkg install freetype:x64-windows-static-md
git clone https://github.com/Netdex/niinii.git
cd niinii
cargo build --release
cargo +stable-i686-pc-windows-msvc build --target i686-pc-windows-msvc --release
Known Issues
High CPU usage when out of focus
Most visual novels are written in engines which use D3D9.
This is not always true though, you may need to adjust the
hooking code as necessary.
There is limited recovery code for when frame buffers are
resized, devices are reset, contexts are changed etc. This
may lead to breakages when switching in and out full-
screen mode, resizing the window, and switching to
another application.
Some visual novel engines will present only when
necessary rather than at a fixed framerate. In this case,
niinii won't work properly since it expects a fixed refresh
rate.
In overlay mode, niinii displays a transparent window which
covers the entire screen. Newer Chromium-based browsers
have a feature which suspends drawing when the window is
fully occluded, for performance reasons. The window displayed
by niinii counts as full occlusion despite being transparent,
which causes Chromium-based browsers to suspend drawing. I
suspect this could also happen with some Electron apps, but I
haven't tested it.
TODO
See NOTICE for a list of third-party software used in this
Hooking not working
Issues with Chromium-based browsers
Troubleshooting
Third-party
|
[
"Multilinguality",
"Representation Learning",
"Syntactic Text Processing"
] |
[
"Vocabulary, Dictionary, and Language Input Method"
] |
|
true |
https://github.com/yagays/ja-timex
|
2021-07-19T12:51:35Z
|
自然言語で書かれた時間情報表現を抽出/規格化するルールベースの解析器
|
yagays / ja-timex
Public
42 Branches
17 Tags
Go to file
Go to file
Code
yagays bump up version to 0.2.8
5d72377 · last year
.github
docs
ja_timex
tests
tools
.gitignore
LICENSE
README.md
poetry.lock
pyproject.toml
setup.cfg
tox.ini
About
自然言語で書かれた時間情報表現を
抽出/規格化するルールベースの解
析器
# python # nlp # datetime # regular-expression
# temporal # time-parsing
Readme
MIT license
Activity
134 stars
2 watching
9 forks
Report repository
Releases 17
v0.2.8
Latest
on Nov 4, 2023
+ 16 releases
Contributors
4
yagays Yuki Okuda
tpdn
shirayu Yuta Hayashibe
otokunaga2 otokunaga
Languages
Python 100.0%
Code
Issues
2
Pull requests
Actions
Projects
Wiki
Security
Insights
ja-timex
README
License
自然言語で書かれた時間情報表現を抽出/規格化するルールベースの解析器
ja-timex は、現代日本語で書かれた自然文に含まれる時間情報表現を抽出しTIMEX3 と呼ばれるアノテーショ
ン仕様に変換することで、プログラムが利用できるような形に規格化するルールベースの解析器です。
以下の機能を持っています。
ルールベースによる日本語テキストからの日付や時刻、期間や頻度といった時間情報表現を抽出
アラビア数字/漢数字、西暦/和暦などの多彩なフォーマットに対応
時間表現のdatetime/timedeltaオブジェクトへの変換サポート
ja-timex documentation
本パッケージは、以下の論文で提案されている時間情報アノテーションの枠組みを元に作成しています。
概要
入力
from ja_timex import TimexParser
timexes = TimexParser().parse("彼は2008年4月から週に3回のジョギングを、朝8時から1時間行ってきた")
出力
[<TIMEX3 tid="t0" type="DATE" value="2008-04-XX" text="2008年4月">,
<TIMEX3 tid="t1" type="SET" value="P1W" freq="3X" text="週に3回">,
<TIMEX3 tid="t2" type="TIME" value="T08-XX-XX" text="朝8時">,
<TIMEX3 tid="t3" type="DURATION" value="PT1H" text="1時間">]
datetime/timedeltaへの変換
# <TIMEX3 tid="t0" type="DATE" value="2008-04-XX" text="2008年4月">
In []: timexes[0].to_datetime()
Out[]: DateTime(2008, 4, 1, 0, 0, 0, tzinfo=Timezone('Asia/Tokyo'))
# <TIMEX3 tid="t3" type="DURATION" value="PT1H" text="1時間">
In []: timexes[3].to_duration()
Out[]: Duration(hours=1)
インストール
pip install ja-timex
ドキュメント
参考仕様
[1] 小西光, 浅原正幸, & 前川喜久雄. (2013). 『現代日本語書き言葉均衡コーパス』 に対する時間情報アノテ
ーション. 自然言語処理, 20(2), 201-221.
[2] 成澤克麻 (2014)「自然言語処理における数量表現の取り扱い」東北大学大学院 修士論文
|

# ja-timex
自然言語で書かれた時間情報表現を抽出/規格化するルールベースの解析器
## 概要
`ja-timex` は、現代日本語で書かれた自然文に含まれる時間情報表現を抽出し`TIMEX3`と呼ばれるアノテーション仕様に変換することで、プログラムが利用できるような形に規格化するルールベースの解析器です。
以下の機能を持っています。
- ルールベースによる日本語テキストからの日付や時刻、期間や頻度といった時間情報表現を抽出
- アラビア数字/漢数字、西暦/和暦などの多彩なフォーマットに対応
- 時間表現のdatetime/timedeltaオブジェクトへの変換サポート
### 入力
```python
from ja_timex import TimexParser
timexes = TimexParser().parse("彼は2008年4月から週に3回のジョギングを、朝8時から1時間行ってきた")
```
### 出力
```python
[<TIMEX3 tid="t0" type="DATE" value="2008-04-XX" text="2008年4月">,
<TIMEX3 tid="t1" type="SET" value="P1W" freq="3X" text="週に3回">,
<TIMEX3 tid="t2" type="TIME" value="T08-XX-XX" text="朝8時">,
<TIMEX3 tid="t3" type="DURATION" value="PT1H" text="1時間">]
```
### datetime/timedeltaへの変換
```python
# <TIMEX3 tid="t0" type="DATE" value="2008-04-XX" text="2008年4月">
In []: timexes[0].to_datetime()
Out[]: DateTime(2008, 4, 1, 0, 0, 0, tzinfo=Timezone('Asia/Tokyo'))
```
```python
# <TIMEX3 tid="t3" type="DURATION" value="PT1H" text="1時間">
In []: timexes[3].to_duration()
Out[]: Duration(hours=1)
```
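A small consumption sketch built on the example above (not from the original README): the attribute names `type`, `value`, and `text` are inferred from the `TIMEX3` repr shown in the output; treat them as an assumption and check the documentation for the exact accessors.

```python
from ja_timex import TimexParser

timexes = TimexParser().parse("彼は2008年4月から週に3回のジョギングを、朝8時から1時間行ってきた")

# keep only DATE expressions and show their surface text and normalized value
dates = [t for t in timexes if t.type == "DATE"]
print([(t.text, t.value) for t in dates])
# e.g. [('2008年4月', '2008-04-XX')]
```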
## インストール
```
pip install ja-timex
```
## ドキュメント
[ja\-timex documentation](https://ja-timex.github.io/docs/)
### 参考仕様
本パッケージは、以下の論文で提案されている時間情報アノテーションの枠組みを元に作成しています。
- [1] [小西光, 浅原正幸, & 前川喜久雄. (2013). 『現代日本語書き言葉均衡コーパス』 に対する時間情報アノテーション. 自然言語処理, 20(2), 201-221.](https://www.jstage.jst.go.jp/article/jnlp/20/2/20_201/_article/-char/ja/)
- [2] [成澤克麻 (2014)「自然言語処理における数量表現の取り扱い」東北大学大学院 修士論文](http://www.cl.ecei.tohoku.ac.jp/publications/2015/mthesis2013_narisawa_submitted.pdf)
|
[
"Information Extraction & Text Mining",
"Named Entity Recognition"
] |
[] |