#NLP# Learn how to design, develop, deploy, and iterate on production-grade machine learning applications
The largest collection of PyTorch image encoders / backbones, including train, eval, inference, and export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT)...
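For orientation, a minimal usage sketch of timm (assuming the standard `timm.create_model` API; the model name is illustrative):

```python
# Minimal timm sketch: load a pretrained backbone and run inference.
# Model name "resnet50" is illustrative; see timm.list_models() for others.
import timm
import torch

model = timm.create_model("resnet50", pretrained=True)  # downloads pretrained weights
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy batch of one 224x224 RGB image
with torch.no_grad():
    logits = model(x)            # (1, 1000) ImageNet class logits
```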
#Computer Science# PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle (飞桨): high-performance single-machine and distributed training for deep learning and machine learning, plus cross-platform deployment)
#Search# PaddleNLP 2.0 is the core NLP library of the PaddlePaddle ecosystem. It offers easy-to-use text-domain APIs, application examples for many scenarios, and high-performance distributed training, aiming to boost developers' productivity on NLP tasks and to provide best practices on top of the PaddlePaddle 2.0 core framework.
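A hedged sketch of the out-of-the-box `Taskflow` API that PaddleNLP 2.x exposes (task name and input text are illustrative):

```python
# Sketch of PaddleNLP's Taskflow API; output format may vary by version.
from paddlenlp import Taskflow

senta = Taskflow("sentiment_analysis")  # downloads a pretrained model on first use
result = senta("这个产品的质量非常好")    # illustrative input
print(result)  # e.g. [{'text': ..., 'label': 'positive', 'score': ...}]
```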
#Computer Science# SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 14+ clouds). Get unified execution, cost savings, and high GPU availability via a simple interface.
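A hedged sketch using SkyPilot's Python API (names follow recent docs; exact signatures may differ by version, and the commands are placeholders):

```python
# Hedged SkyPilot sketch: define a task, request an accelerator, launch it.
import sky

task = sky.Task(
    setup="pip install torch",  # runs once when the cluster is provisioned
    run="python train.py",      # the actual job command (placeholder)
)
task.set_resources(sky.Resources(accelerators="A100:1"))  # one A100, any viable cloud

sky.launch(task, cluster_name="dev")  # provisions the cheapest option and runs the job
```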
Fengshenbang-LM (封神榜大模型) is an open-source large-model ecosystem led by the Cognitive Computing and Natural Language Research Center at IDEA Research Institute, intended to serve as infrastructure for Chinese AIGC and cognitive intelligence.
#Computer Science# FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on a...
#Computer Science# A high-performance and generic framework for distributed DNN training
#Computer Science# Fast and flexible AutoML with learning guarantees.
#Computer Science# Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
#LLM# Training and serving large-scale neural networks with auto parallelization.
#Computer Science# Decentralized deep learning in PyTorch. Built to train models across thousands of volunteer machines around the world.
DLRover: An Automatic Distributed Deep Learning System
Library for Fast and Flexible Human Pose Estimation
#Search# DeepRec is a high-performance recommendation deep learning framework based on TensorFlow. It is hosted in incubation in the LF AI & Data Foundation.
#Computer Science# Efficient Deep Learning Systems course materials (HSE, YSDA)
#NLP# Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates"
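An illustrative sketch of the core idea, not the official implementation: train a low-rank delta, periodically merge it into the frozen base weights and reinitialize it, so the accumulated update can exceed the rank of any single low-rank phase:

```python
# Illustrative ReLoRA-style layer (not the official code): the learned
# update W + B@A is low-rank per phase, but repeated merge-and-reset
# lets the sum of merged updates reach high rank.
import torch
import torch.nn as nn

class ReLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02)
        self.weight.requires_grad = False           # frozen base weights
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return x @ (self.weight + self.B @ self.A).T

    @torch.no_grad()
    def merge_and_reset(self):
        # Fold the learned low-rank update into the base weights, then
        # reinitialize A/B so the next phase learns a fresh delta.
        self.weight += self.B @ self.A
        self.A.normal_(std=0.01)
        self.B.zero_()
```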
#Computer Science# Resource-adaptive cluster scheduler for deep learning training.
#NLP# LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training
Best practices & guides on how to write distributed PyTorch training code
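Such guides typically converge on a minimal `DistributedDataParallel` setup; a sketch assuming launch via `torchrun` (the model and data here are placeholders):

```python
# Minimal DDP sketch; launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # reads RANK/WORLD_SIZE set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])  # one process per GPU
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 1).to(f"cuda:{local_rank}")  # placeholder model
    model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced automatically
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                          # placeholder training loop
        x = torch.randn(32, 10, device=f"cuda:{local_rank}")
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```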