#NLP#Sparsity-aware deep learning inference runtime for CPUs
#NLP#Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Code for CRATE (Coding RAte reduction TransformEr).
#Computer Science#A research library for PyTorch-based neural network pruning, compression, and more (see the magnitude-pruning sketch after this list).
Complex-valued neural networks for PyTorch, and Variational Dropout for real and complex layers.
Repository to track the progress in model compression and acceleration
Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference".
[CVPR 2023] Efficient Map Sparsification Based on 2D and 3D Discretized Grids
#Computer Science#(Unstructured) Weight Pruning via Adaptive Sparsity Loss
#Computer Science#Sparsify Your Flux Models
TensorFlow implementation of weight and unit pruning and sparsification
#Computer Science#Feather is a module that enables effective sparsification of neural networks during training (see the gradual-schedule sketch after this list). This repository accompanies the paper "Feather: An Elegant Solution to Effective DNN Sparsification" (BMVC 2023).
A simple implementation of the CVPR 2024 paper "JointSQ: Joint Sparsification-Quantization for Distributed Learning".
🌠 Enhanced Network Compression Through Tensor Decompositions and Pruning
A simple C++14 and CUDA-based header-only library with tools for sparse machine learning.
An implementation and report of twice-Ramanujan graph sparsifiers.
Improves the communication efficiency of federated learning by sparsifying the parameters that clients upload (see the top-k sparsification sketch after this list).
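
Several of the PyTorch pruning entries above revolve around one basic operation: zeroing out low-magnitude weights. Below is a minimal sketch using PyTorch's built-in `torch.nn.utils.prune` module; the toy architecture and the 50% sparsity level are arbitrary illustrative choices, not settings taken from any repository listed here.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy model; the architecture is an arbitrary placeholder.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# L1-magnitude unstructured pruning: zero the 50% of weights with the
# smallest absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Pruning is applied through a mask and a forward pre-hook; make it
# permanent to bake the zeros into the weight tensor itself.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

# Verify the resulting sparsity.
zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, nn.Linear))
total = sum(m.weight.numel()
            for m in model.modules() if isinstance(m, nn.Linear))
print(f"sparsity: {zeros / total:.2%}")
```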
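
Entries such as Feather and JointSQ sparsify during training rather than in one shot afterward. A common ingredient in that setting is a schedule that ramps the sparsity target from zero to its final value over the course of training. The sketch below implements the polynomial schedule of Zhu & Gupta (2017) as a generic illustration; it is not the algorithm of either paper, and the 90% target and cubic power are illustrative defaults.

```python
def polynomial_sparsity(step: int, total_steps: int,
                        final_sparsity: float = 0.9, power: int = 3) -> float:
    """Sparsity target at `step`, ramping from 0 to `final_sparsity`.

    Gradual-pruning schedule from Zhu & Gupta (2017); the defaults are
    illustrative, not values from any of the repositories above.
    """
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** power)

# Example: the schedule rises quickly early on, then flattens out.
for step in (0, 250, 500, 750, 1000):
    print(step, round(polynomial_sparsity(step, total_steps=1000), 3))
```

At each pruning step during training, the current target would be fed to a magnitude-pruning routine like the one sketched above.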
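
For the federated-learning entry, the usual mechanism is for each client to transmit only the largest-magnitude entries of its update. Here is a self-contained sketch of top-k sparsification with server-side reconstruction; the 1% keep-fraction and the function names `topk_sparsify` and `densify` are hypothetical choices for illustration.

```python
import torch

def topk_sparsify(update: torch.Tensor, k_frac: float = 0.01):
    """Return (indices, values) of the k largest-magnitude entries.

    A client sends this pair instead of the dense tensor; the 1%
    keep-fraction is an illustrative default.
    """
    flat = update.flatten()
    k = max(1, int(k_frac * flat.numel()))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]

def densify(indices: torch.Tensor, values: torch.Tensor,
            shape: torch.Size) -> torch.Tensor:
    """Server side: scatter the received values back into a dense tensor."""
    flat = torch.zeros(shape).flatten()
    flat[indices] = values
    return flat.reshape(shape)

# Round trip on a random "update": only ~1% of entries are transmitted.
g = torch.randn(64, 128)
idx, vals = topk_sparsify(g)
g_hat = densify(idx, vals, g.shape)
assert torch.equal(g_hat.flatten()[idx], vals)
print(f"{idx.numel()} of {g.numel()} entries transmitted")
```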