A sparsity-aware enterprise inference runtime for AI models on CPUs. Get the most from your CPU infrastructure with DeepSparse by running performant computer vision (CV), natural language processing (NLP), and large language model (LLM) workloads.
Features
- Optimized for sparse deep learning models
- Enables high-speed inference on CPUs
- Supports ONNX model format for broad compatibility
- Works with sparsified versions of popular deep learning models
- Scales from edge devices to cloud deployments
- Integrates with PyTorch and TensorFlow models
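To make "optimized for sparse models" concrete, here is a toy, self-contained sketch (not DeepSparse's actual kernels) of why sparsity speeds up CPU inference: storing only the nonzero weights of a layer lets a matrix-vector product skip the zero multiplications entirely. The helper names `to_csr` and `sparse_matvec` are illustrative, not part of the DeepSparse API.

```python
# Toy illustration of sparsity-aware inference (not DeepSparse's real kernels):
# a layer that is 90% zeros needs only ~10% of the multiply-adds of its
# dense counterpart when stored in a compressed format such as CSR.

def to_csr(dense):
    """Convert a dense 2-D weight matrix (list of lists) into a CSR-like
    (values, col_indices, row_ptr) triple holding only nonzero entries."""
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0.0:
                values.append(w)
                cols.append(j)
        row_ptr.append(len(values))
    return values, cols, row_ptr

def sparse_matvec(csr, x):
    """Multiply a CSR-stored matrix by vector x, touching only nonzeros."""
    values, cols, row_ptr = csr
    out = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[cols[k]]
        out.append(acc)
    return out

# 3x4 layer with only 3 nonzero weights out of 12 entries (75% sparse).
weights = [
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.5],
    [3.0, 0.0, 0.0, 0.0],
]
csr = to_csr(weights)
print(sparse_matvec(csr, [1.0, 2.0, 3.0, 4.0]))  # [4.0, 6.0, 3.0]
```

DeepSparse applies the same principle with vectorized, cache-aware CPU kernels; this sketch only shows the arithmetic savings that sparsity makes possible.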
License
MIT License