An open-source project that teaches Windows developers how to add AI to Windows apps using local models and APIs.
Efficient Inference of Transformer models
FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep learning edge inference
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
#LLM#High-speed and easy-to-use LLM serving framework for local deployment
#Android#Simplified AI runtime integration for mobile app development
#LLM#Ollama alternative for Rockchip NPU: an efficient solution for running AI and deep learning models on Rockchip devices with optimized NPU support (rkllm)
Run YOLOv7 object detection on Rockchip NPU platforms (RK3566, RK3568, RK3588, RK3588S, RV1103, RV1106, RK3562); see the inference sketch after this list.
Advanced driver-assistance system with Google Coral Edge TPU Dev Board / USB Accelerator, Intel Movidius NCS (neural compute stick), Myriad 2/X VPU, Gyrfalcon 2801 Neural Accelerator, NVIDIA Jetson N...
#LLM#EmbeddedLLM: API server for embedded device deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU; see the client sketch after this list.
#Android#Kotlin bindings for Edgerunner
#Computer Science#NPUsim: Full-Model, Cycle-Level, and Value-Aware Simulator for DNN Accelerators
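For the Rockchip NPU entries above, inference on-device typically goes through a converted `.rknn` model and the rknn-toolkit-lite2 runtime. The following is a minimal sketch of that flow, not the listed project's own pipeline: the model path, the 640x640 RGB/uint8 input layout, and the test image name are assumptions that depend on how the model was exported.

```python
# Minimal sketch: run a converted YOLOv7 .rknn model with rknn-toolkit-lite2
# on a Rockchip board. Paths, input size, and layout are assumptions.
import cv2
import numpy as np
from rknnlite.api import RKNNLite

MODEL_PATH = "./yolov7.rknn"   # hypothetical path to a converted model
INPUT_SIZE = 640               # assumed export resolution

rknn = RKNNLite()
if rknn.load_rknn(MODEL_PATH) != 0:
    raise RuntimeError("failed to load RKNN model")
if rknn.init_runtime() != 0:   # on RK3588 an optional core_mask can be passed
    raise RuntimeError("failed to init NPU runtime")

# Prepare one NHWC uint8 frame (assumed input format of the exported model).
img = cv2.imread("test.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (INPUT_SIZE, INPUT_SIZE))
img = np.expand_dims(img, 0)

# Raw YOLO output heads; box decoding and NMS would follow in a real pipeline.
outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])

rknn.release()
```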
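For the local LLM serving entries above, a common integration pattern is an OpenAI-compatible HTTP API. The sketch below assumes the chosen server exposes a `/v1/chat/completions` endpoint on `localhost:8000` and accepts a placeholder model name; the actual host, port, endpoint, and model identifier vary per project, so check each project's documentation.

```python
# Minimal sketch of querying a locally hosted LLM server, assuming an
# OpenAI-compatible chat completions endpoint (host/port/model are placeholders).
import json
import urllib.request

payload = {
    "model": "local-model",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarize what an NPU does."}],
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
print(body["choices"][0]["message"]["content"])
```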