Installation
Prerequisites
Before proceeding with the installation of the Triton Model Navigator, ensure your system meets the following criteria:
- Operating System: Linux (Ubuntu 20.04+ recommended)
- Python: Version 3.8 or newer
- NVIDIA GPU
You can use NGC Containers for PyTorch and TensorFlow, which contain all the necessary dependencies.
The library can be installed in:
- system environment
- virtualenv
- Docker
NVIDIA-optimized Docker images for Python frameworks can be obtained from the NVIDIA NGC Catalog.
When using these images, we recommend installing the NVIDIA Container Toolkit to run model inference on NVIDIA GPUs.
Install
The Triton Model Navigator can be installed from pypi.org.
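A minimal install from PyPI (the package is published under the name triton-model-navigator) typically looks like:

```shell
# Install the latest Triton Model Navigator release from pypi.org
pip install -U triton-model-navigator
```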
Installing with PyTorch extras
To install with PyTorch dependencies, use:
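A sketch of the command, assuming the extras group is named torch (verify against the release you use):

```shell
# Install with the PyTorch extras; quoting avoids shell glob expansion
pip install -U "triton-model-navigator[torch]"
```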
or with nvidia-pyindex:
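With nvidia-pyindex, the NGC package index is registered first and the extras install follows; the [torch] extras name is an assumption:

```shell
# Register the NVIDIA NGC PyPI index, then install with PyTorch extras
pip install nvidia-pyindex
pip install -U "triton-model-navigator[torch]"
```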
Installing with TensorFlow extras
To install with TensorFlow dependencies, use:
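A sketch of the command, assuming the extras group is named tensorflow:

```shell
# Install with the TensorFlow extras
pip install -U "triton-model-navigator[tensorflow]"
```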
or with nvidia-pyindex:
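With nvidia-pyindex, the same pattern applies; the [tensorflow] extras name is an assumption:

```shell
# Register the NVIDIA NGC PyPI index, then install with TensorFlow extras
pip install nvidia-pyindex
pip install -U "triton-model-navigator[tensorflow]"
```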
Installing with JAX extras (experimental)
To install with JAX dependencies, use:
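A sketch of the command, assuming the extras group is named jax:

```shell
# Install with the experimental JAX extras
pip install -U "triton-model-navigator[jax]"
```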
or with nvidia-pyindex:
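With nvidia-pyindex, the same pattern applies; the [jax] extras name is an assumption:

```shell
# Register the NVIDIA NGC PyPI index, then install with JAX extras
pip install nvidia-pyindex
pip install -U "triton-model-navigator[jax]"
```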
Installing with onnxruntime-gpu for CUDA 11
Since version 1.19.0, ONNX Runtime defaults to CUDA 12. To install with CUDA 11 support, use the following extra index URL:
--extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/
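For example, combined with the base install (the package name is taken from PyPI; verify the index URL against the ONNX Runtime documentation for your version):

```shell
# Install while resolving onnxruntime-gpu from the CUDA 11 package index
pip install -U triton-model-navigator \
    --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/
```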
Building the wheel
The Triton Model Navigator can be built as a wheel. All the necessary steps are provided as Makefile targets.
First, install the Triton Model Navigator with development packages:
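Assuming the repository's Makefile provides an install-dev target for development dependencies (check the Makefile for the exact target name), this step might be:

```shell
# Install the project together with its development dependencies
make install-dev
```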
Next, simply run:
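The wheel-building target name is an assumption based on a common Makefile convention; check the repository's Makefile for the actual target:

```shell
# Build the wheel; the artifact is written to the dist directory
make dist
```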
The wheel will be generated in the dist directory.