Training and deploying machine learning models on Amazon SageMaker
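This matches the SageMaker Python SDK's tagline. A minimal sketch, assuming that SDK, of its train-then-deploy flow; the image URI, IAM role, and S3 prefix are placeholders you would replace with your own:

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="<training-image-uri>",                      # placeholder image
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/train/"})          # placeholder S3 prefix

# Deploy the trained model behind a real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```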
Run Local LLMs on Any Device. Open-source and available for commercial use
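This matches GPT4All's tagline. A minimal sketch, assuming the gpt4all Python bindings; the model filename is a placeholder that is downloaded on first use:

```python
from gpt4all import GPT4All

# Downloads the (placeholder) quantized model file on first run, then generates locally.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Name three fruits.", max_tokens=64))
```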
The official Python client for the Hugging Face Hub
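A minimal sketch of the most common call, downloading (and caching) a single file from a public model repo:

```python
from huggingface_hub import hf_hub_download

# Fetches config.json from the bert-base-uncased repo into the local cache.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)
```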
Single-cell analysis in Python
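This matches scanpy's tagline. A minimal sketch, assuming scanpy, of a standard workflow on one of its bundled demo datasets (clustering assumes the leidenalg extra is installed):

```python
import scanpy as sc

adata = sc.datasets.pbmc68k_reduced()  # small preprocessed AnnData example
sc.pp.neighbors(adata)                 # k-NN graph on the stored PCA
sc.tl.umap(adata)                      # 2-D embedding
sc.tl.leiden(adata)                    # graph clustering (needs leidenalg)
sc.pl.umap(adata, color="leiden")      # colour the embedding by cluster
```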
Ready-to-use OCR with 80+ supported languages
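This matches EasyOCR's tagline. A minimal sketch, assuming EasyOCR; "sign.png" is a placeholder image path:

```python
import easyocr

reader = easyocr.Reader(["en"])        # detection/recognition models download on first run
for bbox, text, conf in reader.readtext("sign.png"):
    print(text, conf)                  # recognized string and its confidence
```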
A high-throughput and memory-efficient inference and serving engine
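This matches vLLM's tagline. A minimal sketch, assuming vLLM, of its offline batch-generation API:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")   # small model purely for illustration
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```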
FlashInfer: Kernel Library for LLM Serving
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
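A minimal sketch of LMDeploy's high-level pipeline API; the model id is illustrative:

```python
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2_5-7b-chat")                # illustrative model id
responses = pipe(["Introduce deep learning in one sentence."])
print(responses[0].text)
```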
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
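This matches EconML's tagline. A minimal sketch, assuming EconML, of estimating heterogeneous treatment effects with double machine learning on synthetic data:

```python
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                   # features driving effect heterogeneity
T = rng.binomial(1, 0.5, size=1000)              # binary treatment
Y = T * (1 + X[:, 0]) + rng.normal(size=1000)    # true CATE is 1 + X[:, 0]

est = LinearDML(discrete_treatment=True)
est.fit(Y, T, X=X)
print(est.effect(X[:5]))                         # estimated CATE for five units
```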
DoWhy is a Python library for causal inference
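A minimal sketch of DoWhy's model / identify / estimate steps on synthetic data with one confounder:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
w = rng.normal(size=2000)                        # confounder
t = (w + rng.normal(size=2000) > 0).astype(int)  # treatment depends on w
y = 2 * t + w + rng.normal(size=2000)            # true effect is 2

model = CausalModel(
    data=pd.DataFrame({"W": w, "T": t, "Y": y}),
    treatment="T", outcome="Y", common_causes=["W"],
)
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)                            # should be close to 2
```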
Operating LLMs in production
Uplift modeling and causal inference with machine learning algorithms
Adversarial Robustness Toolbox (ART) - Python Library for ML security
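A minimal sketch of an ART evasion attack against a scikit-learn classifier, along the lines of the library's getting-started examples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype(np.float32)
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)
classifier = SklearnClassifier(model=model)           # ART wrapper
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)                          # perturbed inputs
print("accuracy drop:", model.score(X, y) - model.score(X_adv, y))
```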
A unified framework for scalable computing
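This description fits Ray. A minimal sketch, assuming Ray, of fanning a function out across local cores:

```python
import ray

ray.init()  # starts a local cluster in-process

@ray.remote
def square(x: int) -> int:
    return x * x

futures = [square.remote(i) for i in range(8)]   # schedule 8 parallel tasks
print(ray.get(futures))                          # [0, 1, 4, 9, 16, 25, 36, 49]
```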
Integrate, train, and manage any AI model or API with your database
Everything you need to build state-of-the-art foundation models
Large Language Model Text Generation Inference
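A minimal sketch of querying a running Text Generation Inference server (e.g. its Docker image listening on an assumed local port 8080) via huggingface_hub's InferenceClient:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")    # assumed local TGI endpoint
print(client.text_generation("What is deep learning?", max_new_tokens=50))
```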
The easiest and laziest way to build multi-agent LLM applications
A library for accelerating Transformer models on NVIDIA GPUs
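This matches Transformer Engine's tagline. A minimal sketch, assuming that library, of an FP8 forward pass (requires an NVIDIA GPU with FP8 support, e.g. Hopper):

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)
layer = te.Linear(768, 768, bias=True).cuda()        # drop-in FP8-capable Linear
x = torch.randn(128, 768, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)                                     # matmul runs in FP8
print(y.shape)
```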
A Pythonic framework to simplify AI service building
The Triton Inference Server provides an optimized cloud and edge inferencing solution
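A minimal sketch of a Triton HTTP client call; the model name, tensor names, and shape are placeholders for whatever the server actually hosts:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
data = np.random.rand(1, 3, 224, 224).astype(np.float32)

inputs = [httpclient.InferInput("input__0", list(data.shape), "FP32")]  # placeholder tensor name
inputs[0].set_data_from_numpy(data)
result = client.infer(model_name="my_model", inputs=inputs)             # placeholder model name
print(result.as_numpy("output__0").shape)                               # placeholder output name
```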
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Optimizing inference proxy for LLMs
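This matches optillm's tagline. A minimal sketch, assuming the proxy is running locally on port 8000 as an OpenAI-compatible endpoint; prefixing the model name with a technique slug (here "moa") is assumed here as optillm's convention for selecting an optimization strategy:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed proxy address
resp = client.chat.completions.create(
    model="moa-gpt-4o-mini",   # assumed technique-prefixed model id
    messages=[{"role": "user", "content": "Solve: 23 * 17 = ?"}],
)
print(resp.choices[0].message.content)
```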
Superduper: Integrate AI models and machine learning workflows with your database
A high-performance ML model serving framework offering dynamic batching
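This matches Mosec's tagline. A minimal sketch, assuming Mosec, of a one-stage echo service:

```python
from mosec import Server, Worker

class Echo(Worker):
    def forward(self, data: dict) -> dict:
        return {"echo": data}               # JSON in, JSON out

if __name__ == "__main__":
    server = Server()
    server.append_worker(Echo, num=2)       # two worker processes behind the endpoint
    server.run()
```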