
Specialized Configs for Triton Backends

The Python API provides specialized configuration classes that expose only the options valid for a given model type.

model_navigator.api.triton.ONNXModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for ONNX backend supported model.

Parameters:

  • platform (Optional[Platform]) –

    Override backend parameter with platform. Possible options: Platform.ONNXRuntimeONNX

  • optimization (Optional[ONNXOptimization]) –

    Possible optimizations for ONNX models

backend property

backend: Backend

Define backend value for config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/onnx_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    super().__post_init__()
    if self.optimization and not isinstance(self.optimization, ONNXOptimization):
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")

    if self.platform and self.platform != Platform.ONNXRuntimeONNX:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use: {Platform.ONNXRuntimeONNX}.")

model_navigator.api.triton.ONNXOptimization dataclass

Possible optimizations for ONNX models.

Parameters:

  • accelerator (Optional[Union[OpenVINOAccelerator, TensorRTAccelerator]]) –

    Accelerator for ONNX models. Possible options: OpenVINOAccelerator, TensorRTAccelerator

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/onnx_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    if self.accelerator and type(self.accelerator) not in [OpenVINOAccelerator, TensorRTAccelerator]:
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")
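The two classes above are used together: the optimization object is passed to the config, which validates it in __post_init__. A minimal usage sketch — the import path follows the headings above and the default TensorRTAccelerator() constructor is an assumption; only the class and parameter names are taken from the reference:

```python
# Hypothetical usage sketch; assumes Model Navigator is installed.
from model_navigator.api.triton import (
    ONNXModelConfig,
    ONNXOptimization,
    TensorRTAccelerator,
    Platform,
)

config = ONNXModelConfig(
    platform=Platform.ONNXRuntimeONNX,  # the only platform this config accepts
    optimization=ONNXOptimization(
        accelerator=TensorRTAccelerator(),  # or OpenVINOAccelerator()
    ),
)
```

Passing any other Platform value, or an optimization object of the wrong type, raises ModelNavigatorWrongParameterError as soon as the dataclass is constructed.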

model_navigator.api.triton.PythonModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for Python backend supported model.

Parameters:

  • inputs –

    Definition of model inputs; at least one input is required

  • outputs –

    Definition of model outputs; at least one output is required

backend property

backend: Backend

Define backend value for config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/python_model_config.py
def __post_init__(self) -> None:
    """Validate the configuration for early error handling."""
    super().__post_init__()
    assert len(self.inputs) > 0, "Model inputs definition is required for Python backend."
    assert len(self.outputs) > 0, "Model outputs definition is required for Python backend."
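The assertion-based check above is easy to reproduce in isolation. A self-contained sketch of the same pattern — this is not the real class, and the field types are simplified placeholders:

```python
from dataclasses import dataclass, field
from typing import Sequence, Tuple


@dataclass
class PythonModelConfigSketch:
    """Sketch of the required-inputs validation shown above (not the real class)."""

    # Placeholder element type: (tensor_name, dtype_string)
    inputs: Sequence[Tuple[str, str]] = field(default_factory=tuple)
    outputs: Sequence[Tuple[str, str]] = field(default_factory=tuple)

    def __post_init__(self) -> None:
        # The Python backend cannot infer tensor signatures from a model file,
        # so empty definitions are rejected at construction time.
        assert len(self.inputs) > 0, "Model inputs definition is required for Python backend."
        assert len(self.outputs) > 0, "Model outputs definition is required for Python backend."
```

Because the check runs in __post_init__, an invalid config fails immediately rather than at model-load time.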

model_navigator.api.triton.PyTorchModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for PyTorch backend supported model.

Parameters:

  • platform (Optional[Platform]) –

    Override backend parameter with platform. Possible options: Platform.PyTorchLibtorch

  • inputs –

    Definition of model inputs; at least one input is required

  • outputs –

    Definition of model outputs; at least one output is required

backend property

backend: Backend

Define backend value for config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/pytorch_model_config.py
def __post_init__(self) -> None:
    """Validate the configuration for early error handling."""
    super().__post_init__()
    assert len(self.inputs) > 0, "Model inputs definition is required for PyTorch backend."
    assert len(self.outputs) > 0, "Model outputs definition is required for PyTorch backend."

    if self.platform and self.platform != Platform.PyTorchLibtorch:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use: {Platform.PyTorchLibtorch}.")
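The platform guard at the end of this __post_init__ recurs across the specialized configs. A self-contained sketch of just that check, with stand-in Platform and exception classes (not the real types):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Platform(Enum):
    """Stand-in for the real Platform enum; values are illustrative."""

    PyTorchLibtorch = "pytorch_libtorch"
    ONNXRuntimeONNX = "onnxruntime_onnx"


class ModelNavigatorWrongParameterError(Exception):
    """Stand-in for the real exception class."""


@dataclass
class PlatformGuardSketch:
    """Illustrates only the platform check from __post_init__ above."""

    platform: Optional[Platform] = None

    def __post_init__(self):
        # None means "use the backend default"; any explicit value must match
        # the single platform this backend supports.
        if self.platform and self.platform != Platform.PyTorchLibtorch:
            raise ModelNavigatorWrongParameterError(
                f"Unsupported platform provided. Use: {Platform.PyTorchLibtorch}."
            )
```

Leaving platform unset is always valid; only an explicit mismatching value is rejected.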

model_navigator.api.triton.TensorFlowModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for TensorFlow backend supported model.

Parameters:

  • platform (Optional[Platform]) –

    Override backend parameter with platform. Possible options: Platform.TensorFlowSavedModel, Platform.TensorFlowGraphDef

  • optimization (Optional[TensorFlowOptimization]) –

    Possible optimizations for TensorFlow models

backend property

backend: Backend

Define backend value for config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorflow_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    super().__post_init__()
    if self.optimization and not isinstance(self.optimization, TensorFlowOptimization):
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")

    platforms = [Platform.TensorFlowSavedModel, Platform.TensorFlowGraphDef]
    if self.platform and self.platform not in platforms:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use one of: {platforms}")

model_navigator.api.triton.TensorFlowOptimization dataclass

Possible optimizations for TensorFlow models.

Parameters:

  • accelerator (Optional[Union[AutoMixedPrecisionAccelerator, GPUIOAccelerator, TensorRTAccelerator]]) –

    Accelerator for TensorFlow models. Possible options: AutoMixedPrecisionAccelerator, GPUIOAccelerator, TensorRTAccelerator

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorflow_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    if self.accelerator and type(self.accelerator) not in [
        AutoMixedPrecisionAccelerator,
        GPUIOAccelerator,
        TensorRTAccelerator,
    ]:
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")
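As with the ONNX classes, the config and its optimization are combined at construction time. A minimal usage sketch — the import path follows the headings above and the default accelerator constructor is an assumption; class and parameter names come from the reference:

```python
# Hypothetical usage sketch; assumes Model Navigator is installed.
from model_navigator.api.triton import (
    TensorFlowModelConfig,
    TensorFlowOptimization,
    AutoMixedPrecisionAccelerator,
    Platform,
)

config = TensorFlowModelConfig(
    platform=Platform.TensorFlowSavedModel,  # or Platform.TensorFlowGraphDef
    optimization=TensorFlowOptimization(
        # or GPUIOAccelerator() / TensorRTAccelerator()
        accelerator=AutoMixedPrecisionAccelerator(),
    ),
)
```

Any platform outside the two listed options raises ModelNavigatorWrongParameterError during __post_init__.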

model_navigator.api.triton.TensorRTModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for TensorRT platform supported model.

Parameters:

  • platform (Optional[Platform]) –

    Override backend parameter with platform. Possible options: Platform.TensorRTPlan

  • optimization (Optional[TensorRTOptimization]) –

    Possible optimizations for TensorRT models

backend property

backend: Backend

Define backend value for config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorrt_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    super().__post_init__()
    if self.optimization and not isinstance(self.optimization, TensorRTOptimization):
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")

    if self.platform and self.platform != Platform.TensorRTPlan:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use: {Platform.TensorRTPlan}.")

model_navigator.api.triton.TensorRTOptimization dataclass

Possible optimizations for TensorRT models.

Parameters:

  • cuda_graphs (bool) –

    Use CUDA graphs API to capture model operations and execute them more efficiently.

  • gather_kernel_buffer_threshold (Optional[int]) –

    The backend may use a gather kernel to gather input data if the device has direct access to the source buffer and the destination buffer.

  • eager_batching (bool) –

    Start preparing the next batch before the model instance is ready for the next inference.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorrt_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    if not self.cuda_graphs and not self.gather_kernel_buffer_threshold and not self.eager_batching:
        raise ModelNavigatorWrongParameterError("At least one of the optimization options should be enabled.")
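Unlike the other optimization classes, this one validates that at least one option is actually enabled. A self-contained sketch of the same rule, with a stand-in exception class (not the real type):

```python
from dataclasses import dataclass
from typing import Optional


class ModelNavigatorWrongParameterError(Exception):
    """Stand-in for the real exception class."""


@dataclass
class TensorRTOptimizationSketch:
    """Illustrates the all-options-disabled check from __post_init__ above."""

    cuda_graphs: bool = False
    gather_kernel_buffer_threshold: Optional[int] = None
    eager_batching: bool = False

    def __post_init__(self):
        # An optimization object with every option disabled is meaningless,
        # so it is rejected up front. Note the truthiness check mirrors the
        # quoted source: a threshold of 0 counts as disabled.
        if not self.cuda_graphs and not self.gather_kernel_buffer_threshold and not self.eager_batching:
            raise ModelNavigatorWrongParameterError(
                "At least one of the optimization options should be enabled."
            )
```

Enabling any single option — for example TensorRTOptimizationSketch(cuda_graphs=True) — satisfies the check.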