Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus and leaderboard
#NLP# Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus and leaderboard
#NLP# A coding-free framework built on PyTorch for reproducible deep learning studies. Part of the PyTorch Ecosystem. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented.
A serverless architecture for orchestrating ETL jobs in arbitrarily-complex workflows using AWS Step Functions and AWS Lambda.
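As a rough illustration of how such a Step Functions workflow might be triggered from Python with boto3 (the state machine ARN, execution name, and input payload below are placeholders, not part of this project):

```python
import json

import boto3  # AWS SDK for Python

# Placeholder ARN of a Step Functions state machine whose states invoke the
# ETL Lambda functions; replace with the ARN deployed in your own account.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:etl-workflow"

sfn = boto3.client("stepfunctions")

# Start one ETL run; the JSON input is handed to the first state of the workflow.
response = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    name="nightly-etl-2024-01-01",  # execution names must be unique per state machine
    input=json.dumps({"source_bucket": "raw-data", "target_bucket": "curated-data"}),
)
print(response["executionArn"])
```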
#NLP# Pretrain and fine-tune ELECTRA with fastai and Hugging Face (results of the paper replicated!).
#NLP# ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
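A minimal PyTorch sketch of the progressive module replacing idea behind this paper, not the official code; the class and attribute names (TheseusEncoder, group_size, replace_rate) are made up for illustration:

```python
import random

import torch.nn as nn


class TheseusEncoder(nn.Module):
    """Illustrative sketch of progressive module replacing (not the official code).

    Each compact successor block stands in for a group of original predecessor
    layers; during training a coin flip decides, per block, which path runs.
    """

    def __init__(self, predecessor_layers, successor_layers, group_size=2):
        super().__init__()
        assert len(predecessor_layers) == group_size * len(successor_layers)
        self.predecessor = nn.ModuleList(predecessor_layers)  # original (frozen) layers
        self.successor = nn.ModuleList(successor_layers)      # compact trainable layers
        self.group_size = group_size
        self.replace_rate = 0.5  # typically scheduled to grow toward 1.0 during training

    def forward(self, hidden_states):
        for i, successor_block in enumerate(self.successor):
            # During training, randomly replace a group of original layers with
            # its successor; at inference only the compressed successor is used.
            use_successor = (not self.training) or random.random() < self.replace_rate
            if use_successor:
                hidden_states = successor_block(hidden_states)
            else:
                start = i * self.group_size
                for layer in self.predecessor[start:start + self.group_size]:
                    hidden_states = layer(hidden_states)
        return hidden_states
```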
Semantics-aware BERT for Language Understanding (AAAI 2020)
🛸 Run C++ code on the web and create blazingly fast websites! A starter template to easily create WebAssembly packages using type-safe C++ bindings with automatic TypeScript declarations.
AWS tutorial code.
ALBERT model pretraining and fine-tuning using TensorFlow 2.0
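A minimal TF2 fine-tuning sketch for ALBERT, using the Hugging Face transformers classes rather than this repository's own training scripts; the toy texts and labels are placeholders:

```python
import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForSequenceClassification

# Generic TF2 fine-tuning sketch (Hugging Face classes, not this repo's scripts).
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

# Toy binary-classification dataset, purely for illustration.
texts = ["a readable sentence", "another short example"]
labels = [0, 1]
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
```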
PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression"
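A sketch of what a Patient-KD style objective looks like (cross-entropy + softened KL on logits + MSE on paired intermediate [CLS] states); this is an illustration under assumed tensor shapes, not the paper's reference code, and the weights alpha/beta/temperature are placeholder hyperparameters:

```python
import torch.nn.functional as F


def patient_kd_loss(student_logits, teacher_logits, student_hidden, teacher_hidden,
                    labels, temperature=2.0, alpha=0.5, beta=10.0):
    """Illustrative Patient-KD objective (sketch, not the reference implementation).

    student_hidden / teacher_hidden: lists of [CLS] vectors from the student layers
    and from the teacher layers they are paired with (e.g. every other teacher layer).
    """
    # Hard-label cross-entropy on the ground-truth classes.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label distillation: KL divergence between temperature-softened distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # "Patient" term: match normalized intermediate [CLS] representations layer by layer.
    pt = sum(
        F.mse_loss(F.normalize(s, dim=-1), F.normalize(t, dim=-1))
        for s, t in zip(student_hidden, teacher_hidden)
    ) / len(student_hidden)

    return (1 - alpha) * ce + alpha * kd + beta * pt
```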
#NLP# Dataset collection and preprocessing framework for extreme multitask learning in NLP
#NLP# Implementation of XLNet that can load pretrained checkpoints