Installation

This section describes how to install the tool. We assume you are comfortable with the Python programming language and familiar with machine learning models.

Prerequisites

The following prerequisites must be fulfilled to use Triton Model Navigator:

  • Installed Python 3.8+
  • Installed NVIDIA TensorRT for TensorRT model export.
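The Python prerequisite can be verified from within the interpreter; this is a minimal sketch using only the standard library:

```python
import sys

# Triton Model Navigator requires Python 3.8 or newer.
assert sys.version_info >= (3, 8), "Python 3.8+ is required"
print("Python version OK:", sys.version.split()[0])
```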

We recommend using the NGC Containers for PyTorch and TensorFlow, which provide all the necessary dependencies.

The library can be installed in:

  • system environment
  • virtualenv
  • Docker

The NVIDIA optimized Docker images for Python frameworks can be obtained from the NVIDIA NGC Catalog.

To run model inference on NVIDIA GPUs with the NVIDIA optimized Docker images, we recommend installing the NVIDIA Container Toolkit.
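As an illustration, an NGC PyTorch container can be pulled and launched with GPU access once the NVIDIA Container Toolkit is installed; the image tag below is only an example and should be replaced with a current release from the NGC Catalog:

```shell
# Pull an NGC PyTorch image (the tag is an example; pick a current release)
docker pull nvcr.io/nvidia/pytorch:23.09-py3

# Run the container interactively with all GPUs visible
# (requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.09-py3
```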

Installation

The package can be installed from pypi.org using an extra index URL:

pip install -U --extra-index-url https://pypi.ngc.nvidia.com triton-model-navigator[<extras,>]

or with nvidia-pyindex:

pip install nvidia-pyindex
pip install -U triton-model-navigator[<extras,>]

To install Triton Model Navigator from source, use the pip command:

$ pip install --extra-index-url https://pypi.ngc.nvidia.com .[<extras,>]

Extras:

  • tensorflow - Model Navigator with dependencies for TensorFlow2
  • jax - Model Navigator with dependencies for JAX

No extras are needed for use with PyTorch.
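For example, installing with the TensorFlow2 extra follows the pattern shown above (quoting the requirement avoids shell glob expansion of the brackets):

```shell
# Install Model Navigator with the TensorFlow2 extra from the NVIDIA index
pip install -U --extra-index-url https://pypi.ngc.nvidia.com "triton-model-navigator[tensorflow]"
```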

Building the wheel

Triton Model Navigator can be built as a wheel. The Makefile provides the necessary commands for this purpose.

The first command installs the packages required to perform the build:

make install-dev

Once the environment contains the required packages, run:

make dist

The wheel will be generated in the dist directory.
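The built wheel can then be installed locally; this is a sketch that uses a glob because the exact filename depends on the package version:

```shell
# Install the freshly built wheel from the dist directory
pip install dist/triton_model_navigator-*.whl
```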