#NLP# Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT-2, ...)
#NLP# Pre-trained Chinese ELECTRA models
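Once a discriminator checkpoint like this is published on the Hugging Face Hub, it can be loaded directly with transformers. A minimal sketch, assuming the hub id `hfl/chinese-electra-base-discriminator` (an assumption; check the repo's README for the actual released names):

```python
# Minimal sketch: load a Chinese ELECTRA discriminator and score each token
# as "original" vs. "replaced". The checkpoint name is an assumption.
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

name = "hfl/chinese-electra-base-discriminator"  # assumed Hub id
tokenizer = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one replaced-token logit per position
print(torch.sigmoid(logits))        # probability each token was replaced
```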
Pre-trained Transformers for Arabic Language Understanding and Generation (Arabic BERT, Arabic GPT2, Arabic ELECTRA)
#NLP# Pretrained ELECTRA model for Korean
#NLP# Implementations of common NLP tasks, including new-word discovery, plus PyTorch-based word vectors, Chinese text classification, named entity recognition, abstractive summarization, sentence similarity, triple extraction, pre-trained models, and more.
Turkish BERT/DistilBERT, ELECTRA, ConvBERT and T5 models
#NLP# Pretrain and fine-tune ELECTRA with fastai and Hugging Face (results of the paper replicated!)
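For comparison, a minimal fine-tuning sketch using the plain Hugging Face transformers API rather than the repo's fastai pipeline; the checkpoint and the toy batch are illustrative assumptions:

```python
# Minimal sketch: one fine-tuning step of ELECTRA for binary sequence
# classification. Real training would loop over a DataLoader for many steps.
import torch
from transformers import ElectraTokenizerFast, ElectraForSequenceClassification

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = ElectraForSequenceClassification.from_pretrained(
    "google/electra-small-discriminator", num_labels=2
)

# Toy batch: two sentences with binary sentiment labels.
texts = ["a delightful film", "a tedious mess"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # loss is computed internally
outputs.loss.backward()
optimizer.step()
```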
#NLP# Build and train state-of-the-art natural language processing models using BERT
Pytorch-Named-Entity-Recognition-with-transformers
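Transformer-based NER like this is typically exposed through the transformers token-classification pipeline; a minimal sketch, where the checkpoint name is an assumption (any public token-classification model works):

```python
# Minimal sketch: run NER with a pretrained transformer via the pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",    # assumed public NER checkpoint
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```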
DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models
Pre-trained Chinese ELECTRA model: pretraining a Chinese model based on adversarial learning
#NLP# ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
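For reference, the pre-training objective from the paper: a small generator with parameters $\theta_G$ is trained with masked language modelling (MLM), while the discriminator with parameters $\theta_D$ learns to classify every token as original or replaced, and the two losses are combined over the corpus $\mathcal{X}$:

```latex
\min_{\theta_G, \theta_D} \sum_{x \in \mathcal{X}}
    \mathcal{L}_{\mathrm{MLM}}(x, \theta_G)
    + \lambda \, \mathcal{L}_{\mathrm{Disc}}(x, \theta_D)
```

The paper uses $\lambda = 50$, and only the discriminator is kept for downstream fine-tuning.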
Turkish-Reading-Comprehension-Question-Answering-Dataset
Baseline code for Korean open-domain question answering (ODQA)
#NLP# ELECTRA model pre-trained on a Vietnamese corpus
#NLP# GLUE benchmark code based on bert4keras
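For orientation, a minimal sketch of loading and tokenizing one GLUE task; it uses the Hugging Face `datasets` library and a BERT tokenizer instead of the repo's bert4keras pipeline, so the names here are illustrative rather than the repo's own:

```python
# Minimal sketch: fetch the GLUE SST-2 task and tokenize it for a BERT-style
# model. Other GLUE tasks differ only in name and input columns.
from datasets import load_dataset
from transformers import AutoTokenizer

sst2 = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

encoded = sst2.map(encode, batched=True)
print(encoded["train"][0]["input_ids"][:10])
```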