Building binary package from source

This guide provides an outline of the process for building the PyTriton binary package from source. It offers the flexibility to modify the PyTriton code and integrate it with various versions of the Triton Inference Server, including custom builds. Additionally, it allows you to incorporate hotfixes that have not yet been officially released.

Prerequisites

Before building the PyTriton binary package, ensure the following:

  • Docker is installed on the system. For more information, refer to the Docker documentation.
  • Access to the Docker daemon is available from the system or container.
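The prerequisites above can be verified before starting a build. The following is a minimal sketch (not part of PyTriton) that checks whether the `docker` CLI is present and whether the daemon answers; adapt it to your environment as needed.

```shell
# Hypothetical prerequisite check: confirms the docker CLI exists and the
# daemon is reachable. It only prints status; it never aborts the shell.
check_docker() {
  if command -v docker >/dev/null 2>&1; then
    echo "docker CLI: found"
    if docker info >/dev/null 2>&1; then
      echo "docker daemon: reachable"
    else
      echo "docker daemon: NOT reachable (check service status or permissions)"
    fi
  else
    echo "docker CLI: NOT found (install Docker first)"
  fi
}

check_docker
```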

Building PyTriton binary package

To build the wheel binary package, follow these steps from the root directory of the project:

make install-dev
make dist

The wheel package will be located in the dist directory. To install the library, run the following pip command:

pip install dist/nvidia_pytriton-*-py3-none-*_x86_64.whl
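After installing the wheel, a quick sanity check is to import the package from the target interpreter. This is an illustrative snippet, not part of the build process; it prints a notice instead of failing if the package is missing.

```python
# Illustrative post-install check: import the installed package and report
# its version; fall back to a notice if it is not present.
import importlib

try:
    pytriton = importlib.import_module("pytriton")
    print("pytriton version:", getattr(pytriton, "__version__", "unknown"))
except ImportError:
    print("pytriton is not installed in this interpreter")
```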

Building for a specific Triton Inference Server version

Building for an unsupported OS or hardware platform is possible. PyTriton requires the Python backend and either an HTTP or gRPC endpoint. The Triton build can be CPU-only, because inference itself runs in PyTriton's Inference Handlers rather than on the server.

For more information on the Triton Inference Server build process, refer to the building section of Triton Inference Server documentation.

Untested Build

The Triton Inference Server has only been rigorously tested on Ubuntu 20.04. Other OS and hardware platforms are not officially supported. You can test the build by following the steps outlined in the Triton Inference Server testing guide.

Using the Docker build method described there, you can create a tritonserver:latest Docker image and then build PyTriton against it with the following command:

make TRITONSERVER_IMAGE_NAME=tritonserver:latest dist
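The custom-image flow above can be sketched as a dry run. The `run` helper below is a hypothetical convenience that only prints each command so you can review the sequence before executing it for real; drop the helper to actually run the commands. The `tritonserver:latest` tag is the one produced by your Triton build.

```shell
# Dry-run sketch of the custom-image build flow (illustrative, adapt freely).
TRITON_IMAGE="tritonserver:latest"   # tag produced by your Triton build

run() { echo "+ $*"; }   # print commands instead of executing them

run docker image inspect "$TRITON_IMAGE"               # confirm the image exists locally
run make TRITONSERVER_IMAGE_NAME="$TRITON_IMAGE" dist  # build the wheel against it
```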