Related Products

  • RunPod (205 Ratings)
  • Google Compute Engine (1,170 Ratings)
  • Vertex AI (961 Ratings)
  • Dragonfly (16 Ratings)
  • LM-Kit.NET (26 Ratings)
  • Qloo (23 Ratings)
  • Fraud.net (56 Ratings)
  • Dataiku (204 Ratings)
  • Kamatera (152 Ratings)
  • Google AI Studio (11 Ratings)

About (AWS Inferentia)

AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications. The first-generation AWS Inferentia accelerator powers Amazon Elastic Compute Cloud (Amazon EC2) Inf1 instances, which deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable GPU-based Amazon EC2 instances. Many customers, including Airbnb, Snap, Sprinklr, Money Forward, and Amazon Alexa, have adopted Inf1 instances and realized their performance and cost benefits. The first-generation Inferentia has 8 GB of DDR4 memory per accelerator and also features a large amount of on-chip memory. Inferentia2 offers 32 GB of HBM2e per accelerator, increasing the total memory by 4x and memory bandwidth by 10x over Inferentia.
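
To make the workflow concrete, here is a minimal sketch of how a model is typically compiled for Inferentia with the AWS Neuron SDK (torch-neuron) and then run on an Inf1 instance; the ResNet-50 model, input shape, and file name are illustrative assumptions, not details from this listing.

    # Minimal sketch, assuming the AWS Neuron SDK (torch-neuron package) is
    # installed in the compilation environment. Model, input shape, and file
    # name are illustrative, not taken from this listing.
    import torch
    import torch_neuron  # registers the torch.neuron namespace
    import torchvision.models as models

    model = models.resnet50(pretrained=True).eval()
    example = torch.rand(1, 3, 224, 224)  # a single-image, real-time inference input

    # Trace/compile the model so it can run on the Inferentia NeuronCores.
    model_neuron = torch.neuron.trace(model, example_inputs=[example])
    model_neuron.save("resnet50_neuron.pt")

    # At serving time (on an Inf1 instance), load and call it like a TorchScript model.
    compiled = torch.jit.load("resnet50_neuron.pt")
    with torch.no_grad():
        logits = compiled(example)
    print(logits.shape)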

About (Amazon Elastic Inference)

Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 instances, Amazon SageMaker instances, or Amazon ECS tasks, reducing the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, PyTorch, and ONNX models. Inference is the process of making predictions using a trained model. In deep learning applications, inference accounts for up to 90% of total operational costs, for two reasons. First, standalone GPU instances are typically designed for model training, not for inference. While training jobs batch-process hundreds of data samples in parallel, inference jobs usually process a single input in real time and thus consume only a small amount of GPU compute, which makes standalone GPU inference cost-inefficient. Second, standalone CPU instances are not specialized for matrix operations and are therefore often too slow for deep learning inference.
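
To illustrate the attach-an-accelerator model described above, the sketch below deploys a SageMaker endpoint with an Elastic Inference accelerator through the SageMaker Python SDK; the S3 model artifact, framework version, instance type, and accelerator size are assumptions for the example, not details from this listing.

    # Minimal sketch, assuming the SageMaker Python SDK and an execution role
    # are available. The model artifact, framework version, and sizes below
    # are illustrative.
    import sagemaker
    from sagemaker.tensorflow import TensorFlowModel

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()  # assumes this runs inside SageMaker

    model = TensorFlowModel(
        model_data="s3://example-bucket/model.tar.gz",  # hypothetical artifact
        role=role,
        framework_version="2.3",  # an Elastic Inference-compatible TensorFlow version
        sagemaker_session=session,
    )

    # accelerator_type attaches a low-cost Elastic Inference GPU slice to a CPU
    # instance instead of provisioning a full standalone GPU instance.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",
        accelerator_type="ml.eia2.medium",
    )

    result = predictor.predict({"instances": [[0.0] * 10]})  # payload shape depends on the model
    print(result)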

Platforms Supported (AWS Inferentia)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Amazon Elastic Inference)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (AWS Inferentia)

Companies searching for an advanced Deep Learning solution

Audience (Amazon Elastic Inference)

IT teams that need an advanced Infrastructure as a Service solution

Support (AWS Inferentia)

Phone Support
24/7 Live Support
Online

Support (Amazon Elastic Inference)

Phone Support
24/7 Live Support
Online

API (AWS Inferentia)

Offers API

API (Amazon Elastic Inference)

Offers API

Pricing (AWS Inferentia)

No information available.
Free Version
Free Trial

Pricing (Amazon Elastic Inference)

No information available.
Free Version
Free Trial

Reviews/Ratings (AWS Inferentia)

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings (Amazon Elastic Inference)

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training (AWS Inferentia)

Documentation
Webinars
Live Online
In Person

Training (Amazon Elastic Inference)

Documentation
Webinars
Live Online
In Person

Company Information (AWS Inferentia)

Amazon
Founded: 2006
United States
aws.amazon.com/machine-learning/inferentia/

Company Information (Amazon Elastic Inference)

Amazon
Founded: 2006
United States
aws.amazon.com/machine-learning/elastic-inference/

Alternatives (AWS Inferentia)

AWS Neuron (Amazon Web Services)

Alternatives (Amazon Elastic Inference)

AWS Neuron (Amazon Web Services)

Integrations (AWS Inferentia)

AWS EC2 Trn3 Instances
AWS Parallel Computing Service
Amazon EC2
Amazon EC2 G4 Instances
Amazon EC2 Inf1 Instances
Amazon EC2 Trn1 Instances
Amazon Web Services (AWS)
Anyscale
MXNet
PyTorch
TensorFlow
WithoutBG

Integrations (Amazon Elastic Inference)

AWS EC2 Trn3 Instances
AWS Parallel Computing Service
Amazon EC2
Amazon EC2 G4 Instances
Amazon EC2 Inf1 Instances
Amazon EC2 Trn1 Instances
Amazon Web Services (AWS)
Anyscale
MXNet
PyTorch
TensorFlow
WithoutBG