#Computer Science# Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
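The core idea behind LoRA can be sketched in a few lines: freeze the pre-trained weight W and learn only a low-rank update B·A, scaled by alpha/r. This is a minimal NumPy illustration of that math, not the loralib API; all names and shapes here are made up for the example.

```python
import numpy as np

# Sketch of the LoRA update: effective weight = W + (alpha / r) * B @ A.
# W is frozen; only the small matrices A and B would be trained.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Because B starts at zero, the adapted layer is initially
    # identical to the frozen pre-trained layer.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(3, d_in))
assert np.allclose(lora_forward(x), x @ W.T)  # no change before training
```

Zero-initializing B is what makes fine-tuning start exactly from the pre-trained model, which the paper identifies as important for stable adaptation.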
#NLP# Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
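Whole word masking changes only the masking step of BERT pre-training: if any sub-token of a word is selected, all sub-tokens of that word are masked together. A hypothetical pure-Python sketch over WordPiece-style tokens (the function name and inputs are illustrative, not this repo's code):

```python
def whole_word_mask(tokens, mask_word_indices):
    # Group WordPiece sub-tokens into words: a token starting with "##"
    # continues the previous word.
    words, out = [], []
    for tok in tokens:
        if tok.startswith("##") and words:
            words[-1].append(tok)
        else:
            words.append([tok])
    # Mask selected words as whole units, sub-token by sub-token.
    for i, word in enumerate(words):
        if i in mask_word_indices:
            out.extend(["[MASK]"] * len(word))
        else:
            out.extend(word)
    return out

tokens = ["the", "philhar", "##mon", "##ic", "played"]
print(whole_word_mask(tokens, {1}))
# ['the', '[MASK]', '[MASK]', '[MASK]', 'played']
```

For Chinese, where words span multiple characters, the same principle applies with a word segmenter deciding the word boundaries.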
#NLP# BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
#NLP# Awesome Pretrained Chinese NLP Models: a collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models
CLUE, the Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations, with large-scale Chinese pre-trained ALBERT models
#NLP# Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
#NLP# Rust-native, ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2, ...)
Chinese pre-trained RoBERTa models: RoBERTa for Chinese
#Web Crawler# news-please - an integrated web crawler and information extractor for news that just works
The implementation of DeBERTa
#NLP# 🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
#NLP# A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU
CLUENER2020: Chinese Fine-Grained Named Entity Recognition
#NLP# Tencent Pre-training Framework in PyTorch & Pre-trained Model Zoo
#NLP# Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
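GECToR's "tag, not rewrite" approach predicts one edit tag per source token, and correction is just a matter of interpreting those tags. A hypothetical sketch of such a tag interpreter, using tag names that mirror the paper's $KEEP / $DELETE / $APPEND_w / $REPLACE_w scheme (the function itself is illustrative, not the repo's code):

```python
def apply_tags(tokens, tags):
    # Apply one edit tag per token to produce the corrected sequence.
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "$KEEP":
            out.append(tok)                          # keep token unchanged
        elif tag == "$DELETE":
            continue                                 # drop token
        elif tag.startswith("$APPEND_"):
            out.append(tok)                          # keep token, then
            out.append(tag[len("$APPEND_"):])        # append the new word
        elif tag.startswith("$REPLACE_"):
            out.append(tag[len("$REPLACE_"):])       # substitute the word
    return out

tokens = ["He", "go", "to", "school", "yesterday"]
tags = ["$KEEP", "$REPLACE_went", "$KEEP", "$KEEP", "$KEEP"]
print(" ".join(apply_tags(tokens, tags)))
# He went to school yesterday
```

Tagging keeps the output aligned with the input, which is why this formulation is much faster than rewriting the sentence with a seq2seq decoder.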
#NLP# 🤖 A PyTorch library of curated Transformer models and their composable components
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models