Installation

This section describes how to install the tool. We assume you are comfortable with the Python programming language and familiar with machine learning models.

Prerequisites

The following prerequisites must be fulfilled to use Triton Model Navigator:

  • Python 3.8 or newer installed
  • NVIDIA TensorRT installed, required for exporting TensorRT models
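You can quickly check whether the current environment satisfies these prerequisites. The snippet below is a minimal sketch: the first command asserts the interpreter is Python 3.8 or newer, and the second reports whether TensorRT is importable without failing when it is absent.

```shell
# Assert the active interpreter meets the Python 3.8+ requirement
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'

# Report TensorRT availability (needed only for TensorRT model export)
python3 -c 'import tensorrt' 2>/dev/null \
  && echo "TensorRT available" \
  || echo "TensorRT not found"
```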

We recommend using the NGC Containers for PyTorch and TensorFlow, which provide all the necessary dependencies.

The library can be installed in:

  • system environment
  • virtualenv
  • Docker

NVIDIA-optimized Docker images for Python frameworks can be obtained from the NVIDIA NGC Catalog.

When using the NVIDIA-optimized Docker images, we recommend installing the NVIDIA Container Toolkit to run model inference on NVIDIA GPUs.
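With the NVIDIA Container Toolkit installed, an NGC framework container can be started with GPU access. The command below is a sketch: the `24.01-py3` tag is an example only, so pick a current tag from the NVIDIA NGC Catalog.

```
$ docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.01-py3
```

The `--gpus all` flag exposes all host GPUs to the container; `--rm` removes the container when the shell exits.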

Installing from source

To install Triton Model Navigator from source, use the pip command:

$ pip install --extra-index-url https://pypi.ngc.nvidia.com .[<extras,>]

Extras:

  • tensorflow - Model Navigator for TensorFlow2
  • jax - Model Navigator for JAX
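For example, to install with the TensorFlow2 extra (assuming the command is run from the root of the source checkout):

```
$ pip install --extra-index-url https://pypi.ngc.nvidia.com .[tensorflow]
```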

Building the wheel

The Triton Model Navigator can be built as a wheel. For that purpose, the Makefile provides the necessary commands.

First, install the packages required to perform the build:

make install-dev

Once the environment contains the required packages, run:

make dist

The wheel is generated in the dist directory.