SymSpell: 1 million times faster spelling correction & fuzzy search through Symmetric Delete spelling correction algorithm
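The core idea behind Symmetric Delete is to index delete-only variants of every dictionary word, then look up delete variants of the query, so correction reduces to hash-table matches instead of generating all insert/replace/transpose candidates. A minimal Python sketch of that idea (not the library's actual API; names are illustrative, and the final true edit-distance verification and frequency ranking that SymSpell performs are omitted):

```python
from itertools import combinations

def deletes(word, max_distance=2):
    """All strings reachable from `word` by removing up to max_distance characters."""
    variants = {word}
    for d in range(1, min(max_distance, len(word)) + 1):
        for idx in combinations(range(len(word)), d):
            variants.add("".join(c for i, c in enumerate(word) if i not in idx))
    return variants

# Index built once: map every delete variant of every dictionary word back to the word.
dictionary = ["hello", "help", "world"]
index = {}
for term in dictionary:
    for variant in deletes(term):
        index.setdefault(variant, set()).add(term)

def lookup(query, max_distance=2):
    """Candidate corrections: dictionary words sharing a delete variant with the query."""
    candidates = set()
    for variant in deletes(query, max_distance):
        candidates |= index.get(variant, set())
    return candidates

print(lookup("helo"))  # {'hello', 'help'}
```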
#Natural Language Processing# Deep learning Chinese word segmentation
#Natural Language Processing# "Jieba" (Chinese for "to stutter") Chinese text segmentation: built to be the best PHP Chinese word segmentation module.
#Natural Language Processing# Jcseg is a lightweight NLP framework developed in Java. It provides CJK and English segmentation based on the MMSEG algorithm, along with keyword extraction, key sentence extraction, summary extraction imp...
Python port of SymSpell: 1 million times faster spelling correction & fuzzy search through Symmetric Delete spelling correction algorithm
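A hedged usage sketch assuming the symspellpy package; the dictionary filename below is the frequency list distributed with the package, but its exact path on your system is an assumption:

```python
from symspellpy import SymSpell, Verbosity

# Typical setup: maximum edit distance 2, prefix length 7.
sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)

# Frequency dictionary: one "term count" pair per line (path is an assumption).
sym_spell.load_dictionary("frequency_dictionary_en_82_765.txt", term_index=0, count_index=1)

# Single-word correction: closest candidates within edit distance 2.
for suggestion in sym_spell.lookup("memebers", Verbosity.CLOSEST, max_edit_distance=2):
    print(suggestion.term, suggestion.distance, suggestion.count)
```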
zhparser is a PostgreSQL extension for full-text search of the Chinese language
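For context on how such an extension is typically wired up, here is a hedged setup sketch that runs the usual zhparser configuration statements from Python via psycopg2; the configuration name chinese_zh, the token-type mapping, the sample text, and the connection DSN are illustrative assumptions, not taken from this repository:

```python
import psycopg2

# Placeholder connection parameters; point them at a server with zhparser installed.
conn = psycopg2.connect("dbname=test user=postgres")
conn.autocommit = True
cur = conn.cursor()

# Register the parser and create a text search configuration (names are illustrative).
cur.execute("CREATE EXTENSION IF NOT EXISTS zhparser;")
cur.execute("CREATE TEXT SEARCH CONFIGURATION chinese_zh (PARSER = zhparser);")
# Map common token types (noun, verb, adjective, etc.) to the simple dictionary.
cur.execute("ALTER TEXT SEARCH CONFIGURATION chinese_zh "
            "ADD MAPPING FOR n,v,a,i,e,l WITH simple;")

# Full-text search over Chinese text using the new configuration.
cur.execute("SELECT to_tsvector('chinese_zh', %s) @@ to_tsquery('chinese_zh', %s);",
            ("保障房资金压力", "资金"))
print(cur.fetchone()[0])  # True if the query term matches the segmented text
```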
#Natural Language Processing# Chinese text segmentation with R (documentation updated 🎉: https://qinwenfeng.com/jiebaR/ )
Pytorch-NLU, a Chinese text classification and sequence labeling toolkit. It supports multi-class and multi-label classification of long and short Chinese texts, as well as sequence labeling tasks such as Chinese named entity recognition, part-of-speech tagging, word segmentation, and extractive text summarization.
#Natural Language Processing# HanLP Chinese word segmentation plugin for Lucene, supporting Lucene-based systems including Solr
#Search# Tokenizer supporting Lucene 5/6/7/8/9+ versions, LTS
#Computer Science# Chinese word segmentation implemented with deep learning
#Computer Science# Open-source Chinese word segmentation toolkit: Chinese word segmentation Web API, Lucene Chinese tokenization, and mixed Chinese-English segmentation
#Android# Mandarin Chinese text segmentation and mobile dictionary Android app (Chinese word segmentation)
Chinese word segmentation based on deep learning and an LSTM neural network
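Segmenters of this kind usually cast the task as character-level BMES tagging (Begin, Middle, End, Single) over a recurrent network. Below is a minimal PyTorch sketch of that general setup, not this repository's actual model; the class name, dimensions, and the bidirectional choice are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BMESSegmenter(nn.Module):
    """Score each character of a sentence with one of four tags: B, M, E, S."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(hidden_dim * 2, num_tags)

    def forward(self, char_ids):        # (batch, seq_len) character indices
        emb = self.embed(char_ids)      # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(emb)      # (batch, seq_len, 2 * hidden_dim)
        return self.out(hidden)         # (batch, seq_len, num_tags) tag scores

# Toy example: score a 5-character sentence with a 3000-character vocabulary.
model = BMESSegmenter(vocab_size=3000)
scores = model(torch.randint(0, 3000, (1, 5)))
print(scores.shape)  # torch.Size([1, 5, 4])
```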
#Editor# Input tools built for the Chinese Text Project (《中國哲學書電子化計劃》): faster typing and typesetting for a better input experience, plus a "one study treasure beats the four treasures" C# and Word VBA toolkit for Chinese literature and history, written by a Chinese-studies PhD
ik-analyzer for Rust; a Chinese tokenizer for tantivy
A performance-optimized version of Jiebago, with support for loading dictionaries from an io.Reader
#Natural Language Processing# An unsupervised Chinese word segmentation tool.
A Chinese word segmentation plugin based on jieba-rs
PostgreSQL with zhparser