Installation
This section describes how to install the tool. We assume you are comfortable with the Python programming language and familiar with machine learning models.
Prerequisites
The following prerequisites must be fulfilled to use Triton Model Navigator:
- Installed Python 3.8+
- Installed NVIDIA TensorRT for TensorRT model export
We recommend using the NGC containers for PyTorch and TensorFlow, which provide all the necessary dependencies.
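As an illustration, the NGC containers can be pulled from the NVIDIA registry like this (the release tags below are examples only; pick a current release from the NGC Catalog):

```shell
# Pull the NVIDIA optimized framework containers from nvcr.io
# (23.01 is an example tag, not a requirement of Model Navigator)
docker pull nvcr.io/nvidia/pytorch:23.01-py3
docker pull nvcr.io/nvidia/tensorflow:23.01-tf2-py3
```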
The library can be installed in:
- the system environment
- a virtualenv
- a Docker container
The NVIDIA optimized Docker images for Python frameworks can be obtained from the NVIDIA NGC Catalog.
When using the NVIDIA optimized Docker images, we recommend installing the NVIDIA Container Toolkit to run model inference on an NVIDIA GPU.
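With the NVIDIA Container Toolkit installed, GPU access is enabled through Docker's `--gpus` flag; a minimal sketch (the image tag is an example):

```shell
# Start an interactive NGC PyTorch container with all GPUs visible inside it
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.01-py3 bash
```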
Installation
The package can be installed from pypi.org using an extra index URL:
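A sketch of the command, assuming the package name `triton-model-navigator` and the NVIDIA package index at `pypi.ngc.nvidia.com`:

```shell
pip install -U --extra-index-url https://pypi.ngc.nvidia.com triton-model-navigator
```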
or with nvidia-pyindex:
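With this approach the `nvidia-pyindex` helper package registers the NVIDIA index first, so no extra index URL is needed afterwards (package name assumed as above):

```shell
pip install nvidia-pyindex
pip install -U triton-model-navigator
```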
To install Triton Model Navigator from source, use the pip command:
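A sketch of a source install via pip's Git support, assuming the repository lives under the `triton-inference-server` GitHub organization and that `main` is the default branch:

```shell
pip install --extra-index-url https://pypi.ngc.nvidia.com \
    git+https://github.com/triton-inference-server/model_navigator.git@main#egg=triton-model-navigator
```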
Extras:
- tensorflow - Model Navigator with dependencies for TensorFlow2
- jax - Model Navigator with dependencies for JAX
No extras are needed for use with PyTorch.
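Extras are requested with pip's bracket syntax; a sketch, assuming the package name and index URL used above (the brackets are quoted to avoid shell globbing):

```shell
# Install with the TensorFlow2 extra
pip install -U --extra-index-url https://pypi.ngc.nvidia.com "triton-model-navigator[tensorflow]"
# or with the JAX extra
pip install -U --extra-index-url https://pypi.ngc.nvidia.com "triton-model-navigator[jax]"
```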
Building the wheel
The Triton Model Navigator can be built as a wheel. For that purpose, the Makefile provides the necessary commands.
First, install the packages required to perform the build.
Once the environment contains the required packages, run the build command.
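The two steps above might look like the following; the target names are assumptions, so check the repository's Makefile for the exact ones:

```shell
# Hypothetical Makefile targets - verify against the actual Makefile
make install-dev   # install build-time dependencies
make dist          # build the wheel into the dist directory
```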
The wheel will be generated in the dist directory.