LightSeq: an efficient inference and training engine
#Natural Language Processing#Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
#Natural Language Processing#Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT-2, ...)
#Natural Language Processing#spaGO: a self-contained machine learning and natural language processing library in Go, designed to support the neural network architectures relevant to NLP tasks
#Natural Language Processing#Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
#Natural Language Processing#Multilingual/multi-domain question-generation datasets, models, and a Python library for question generation.
#Natural Language Processing#Cybertron: the home planet of the Transformers, in Go
MinT: Minimal Transformer Library and Tutorials
#Natural Language Processing#Build and train state-of-the-art natural language processing models using BERT
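As a point of reference for entries like the one above, here is a minimal, hedged sketch of fine-tuning BERT for binary text classification with Hugging Face transformers; the checkpoint name, toy examples, and learning rate are illustrative assumptions, not code from that repository:

```python
# Minimal BERT fine-tuning sketch (illustrative assumptions throughout).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["great movie", "terrible plot"]  # toy training examples (assumed)
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

model.train()
optimizer.zero_grad()
out = model(**batch, labels=labels)  # passing labels returns cross-entropy loss
out.loss.backward()
optimizer.step()
```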
Code for the AAAI 2022 paper "Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification"
#Natural Language Processing#Calculate perplexity on a text with pre-trained language models. Supports masked LMs (e.g., DeBERTa), causal LMs (e.g., GPT-3), and encoder-decoder LMs (e.g., Flan-T5).
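To make the scoring task above concrete, the following is a minimal sketch of causal-LM perplexity using Hugging Face transformers; it shows the underlying computation rather than that repository's own API, and the GPT-2 checkpoint is an assumption:

```python
# Perplexity of a text under a causal LM: exp of the mean token cross-entropy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(text: str, model_name: str = "gpt2") -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean next-token loss.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```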
#Natural Language Processing#Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper
#Computer Science#Automated categorization: uses neural networks to categorize bank descriptions automatically, reducing manual effort and enhancing efficiency while maint...
BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese (INTERSPEECH 2022)
Abstractive and extractive text summarization using Transformers.
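For orientation, here is a hedged sketch of the abstractive side of that task using the Hugging Face summarization pipeline; the BART checkpoint is an assumption, and an extractive approach would instead rank and select sentences from the source text:

```python
# Abstractive summarization sketch; checkpoint choice is an assumption.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The James Webb Space Telescope, launched in December 2021, observes the "
    "universe in the infrared. Its 6.5-meter mirror lets it study the earliest "
    "galaxies and the atmospheres of planets orbiting other stars."
)
result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```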
#Natural Language Processing#NAACL 2021 - Progressive Generation of Long Text
#Natural Language Processing#Official implementation of the paper "IteraTeR: Understanding Iterative Revision from Human-Written Text" (ACL 2022)