#Computer Science#An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Efficient AI Backbones including GhostNet (CVPR 2020, "GhostNet: More Features from Cheap Operations"), TNT and MLP, developed by Huawei Noah's Ark Lab.
#Computer Science#Awesome Knowledge Distillation
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
[CVPR 2023] DepGraph: Towards Any Structural Pruning
#Computer Science#An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
#Computer Science#Awesome Knowledge-Distillation. A categorized collection of knowledge distillation papers (2014-2021).
#Awesome#A curated list of neural network pruning resources.
micronet, a model compression and deployment library. compression: 1、quantization: quantization-aware training (QAT), High-Bit(>2b)(DoReFa/Quantization and Training of Neural Networks for Efficient Integer-Ari...
#Computer Science#A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research, and we are continuously improving the project. Welcome to PR the works (pape...
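The quantization resources above (and the QAT support in micronet) all rest on the same primitive: mapping a float range onto a small integer grid via a scale and zero point. A minimal sketch of uniform affine (asymmetric) 8-bit quantization in plain Python; all function names here are illustrative, not from any of the listed libraries:

```python
def choose_qparams(xmin, xmax, qmin=0, qmax=255):
    # Map the observed float range [xmin, xmax] onto the integer grid [qmin, qmax].
    # The range is widened to include 0.0 so that zero is exactly representable.
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = qmin - round(xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Round to the nearest grid point, then clamp to the representable range.
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original float value.
    return scale * (q - zero_point)

scale, zp = choose_qparams(-1.0, 1.0)
x = 0.5
err = abs(dequantize(quantize(x, scale, zp), scale, zp) - x)
```

The round-trip error for any in-range value is bounded by the scale (one quantization step), which is why narrow per-channel ranges quantize more accurately than one wide per-tensor range.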
A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility
PyTorch implementation of various Knowledge Distillation (KD) methods.
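The KD repositories above build on Hinton-style distillation: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of the soft-target loss in plain Python (function names and the example logits are illustrative):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_soft_loss(student_logits, teacher_logits, T=4.0):
    # Cross-entropy between teacher soft targets and student soft predictions,
    # scaled by T^2 so soft-target gradients keep the same magnitude as hard-label
    # gradients (Hinton et al., 2015). Usually combined with a hard-label term.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [5.0, 1.0, -2.0]
student = [3.0, 2.0, 0.0]
loss = kd_soft_loss(student, teacher, T=4.0)
```

The loss is minimized when the student's softened distribution equals the teacher's, so a student with the teacher's own logits scores strictly lower than any mismatched student.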
#Computer Science#A toolkit to optimize Keras and TensorFlow ML models for deployment, including quantization and pruning.
#NLP#NLP DNN Toolkit - Building Your NLP DNN Models Like Playing Lego
Efficient computing methods developed by Huawei Noah's Ark Lab
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
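Channel pruning methods like the one above score whole output channels by some importance measure and drop the weakest, shrinking the layer's width rather than scattering zeros. A minimal sketch using the common L1-norm criterion, in plain Python with illustrative names (the ICCV'17 paper itself uses a LASSO-based selection, not raw L1 ranking):

```python
def l1_channel_importance(filters):
    # filters: one flat weight list per output channel of a conv layer.
    # The L1 norm of a filter is a cheap proxy for that channel's importance.
    return [sum(abs(w) for w in f) for f in filters]

def prune_channels(filters, keep_ratio=0.5):
    # Keep the top keep_ratio fraction of channels by L1 norm,
    # preserving their original order so downstream indices stay meaningful.
    scores = l1_channel_importance(filters)
    n_keep = max(1, int(len(filters) * keep_ratio))
    ranked = sorted(range(len(filters)), key=lambda i: -scores[i])
    kept_idx = sorted(ranked[:n_keep])
    return [filters[i] for i in kept_idx], kept_idx

filters = [[0.1, -0.1], [2.0, 1.5], [0.0, 0.05], [1.0, -1.0]]
pruned, kept = prune_channels(filters, keep_ratio=0.5)
```

Because entire channels are removed, the pruned layer is a genuinely smaller dense conv (a structural change, as in DepGraph above), giving real speedups without sparse-kernel support; the matching input channels of the next layer must be removed too.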
#Computer Science#Collection of recent methods on (deep) neural network compression and acceleration.
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
#Awesome#A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hype...
#Computer Science#TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.