Examples

We provide simple examples of how to integrate PyTorch, TensorFlow2, JAX, and plain Python models with the Triton Inference Server using PyTriton. The examples are available in the GitHub repository.

Sample Models Deployment

The list of example model deployments:

Profiling models

The Perf Analyzer tool can be used to profile models served through PyTriton. We have prepared an example of using Perf Analyzer to profile a BART PyTorch model. See the example code in the GitHub repository.

Kubernetes Deployment

The following examples include guides on how to deploy the models on a Kubernetes cluster: