A Julia version of TinySegmenter, a compact Japanese tokenizer.
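A minimal usage sketch, assuming the package is TinySegmenter.jl and that it exports a tokenize function returning an array of token strings (the sample sentence is illustrative):

    # Install first with: using Pkg; Pkg.add("TinySegmenter")
    using TinySegmenter

    # Split a Japanese sentence into word tokens.
    tokens = tokenize("私の名前は中野です")
    println(tokens)  # expected: ["私", "の", "名前", "は", "中野", "です"]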
A serverless web app for ultra-fast generation of highly customizable word clouds from Japanese text.
A small experiment using both MeCab and TinySegmenter to build a JSON list of tokenized Japanese sentences taken from the Tatoeba corpus.
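A rough sketch of the TinySegmenter half of that workflow in Julia, assuming TinySegmenter.jl and JSON.jl are available; the inline sentences are hypothetical stand-ins for Tatoeba lines, and the MeCab comparison is omitted:

    using TinySegmenter, JSON

    # Stand-in sentences; in the experiment these would come from the Tatoeba corpus.
    sentences = ["彼は毎朝新聞を読む。", "今日は天気がいい。"]

    # Pair each sentence with its token list, then serialize to JSON.
    records = [Dict("sentence" => s, "tokens" => tokenize(s)) for s in sentences]
    println(JSON.json(records))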
A Clojure library for splitting Japanese text into words.
A bookmarklet that automatically highlights keywords.