Specialized Configs for Triton Backends

The Python API provides specialized configuration classes that expose only the options available for a given type of model.

model_navigator.api.triton.BaseSpecializedModelConfig dataclass

Bases: ABC

Common fields for specialized model configs.

Read more in the Triton Inference Server documentation.

Parameters:

  • max_batch_size (int, default: 4 ) –

    The maximum batch size that will be handled by the model.

  • batching (bool, default: True ) –

    Flag to enable/disable batching for the model.

  • default_model_filename (Optional[str], default: None ) –

    Optional filename of the model file to use.

  • batcher (Union[DynamicBatcher, SequenceBatcher], default: field(default_factory=DynamicBatcher) ) –

    Configuration of the batcher (dynamic or sequence batching) for the model.

  • instance_groups (List[InstanceGroup], default: field(default_factory=lambda : []) ) –

    Instance group configuration for running multiple instances of the model.

  • parameters (Dict[str, str], default: field(default_factory=lambda : {}) ) –

    Custom parameters for the model or backend.

  • response_cache (bool, default: False ) –

    Flag to enable/disable the response cache for the model.

  • warmup (Dict[str, ModelWarmup], default: field(default_factory=lambda : {}) ) –

    Warmup configuration for the model.

backend abstractmethod property

backend

Backend property that has to be overridden by specialized configs.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/base_model_config.py
def __post_init__(self) -> None:
    """Validate the configuration for early error handling."""
    if self.batching and self.max_batch_size <= 0:
        raise ModelNavigatorWrongParameterError("The `max_batch_size` must be greater or equal to 1.")

    if type(self.batcher) not in [DynamicBatcher, SequenceBatcher]:
        raise ModelNavigatorWrongParameterError("Unsupported batcher type provided.")

    if self.backend != Backend.TensorRT and any(group.profile for group in self.instance_groups):
        raise ModelNavigatorWrongParameterError(
            "Invalid `profile` option. The value can be set only for `backend=Backend.TensorRT`"
        )
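
For illustration, here is a minimal sketch exercising the common fields through a concrete subclass (ONNXModelConfig, documented below). The import path and the InstanceGroup constructor arguments are assumptions based on the parameter types listed above, not confirmed signatures.

from model_navigator.api.triton import (
    DynamicBatcher,
    InstanceGroup,
    ONNXModelConfig,
)

config = ONNXModelConfig(
    max_batch_size=16,                          # must be >= 1 when batching is enabled
    batching=True,
    batcher=DynamicBatcher(),                   # dynamic batching with default settings
    instance_groups=[InstanceGroup(count=2)],   # hypothetical: run two model instances
    parameters={"custom_key": "custom_value"},  # free-form model/backend parameters
)

# __post_init__ runs on construction, so invalid combinations fail early, e.g.
# ONNXModelConfig(batching=True, max_batch_size=0) raises
# ModelNavigatorWrongParameterError.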

model_navigator.api.triton.ONNXModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for a model served with the ONNX backend.

Parameters:

  • platform (Optional[Platform], default: None ) –

    Override the backend parameter with a platform. Possible option: Platform.ONNXRuntimeONNX.

  • optimization (Optional[ONNXOptimization], default: None ) –

    Possible optimizations for ONNX models.

backend property

backend

Define the backend value for the config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/onnx_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    super().__post_init__()
    if self.optimization and not isinstance(self.optimization, ONNXOptimization):
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")

    if self.platform and self.platform != Platform.ONNXRuntimeONNX:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use: {Platform.ONNXRuntimeONNX}.")

model_navigator.api.triton.ONNXOptimization dataclass

Possible optimizations for ONNX models.

Parameters:

  • accelerator (Union[OpenVINOAccelerator, TensorRTAccelerator]) –

    Execution accelerator for the model.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/onnx_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    if self.accelerator and type(self.accelerator) not in [OpenVINOAccelerator, TensorRTAccelerator]:
        raise ModelNavigatorWrongParameterError("Unsupported accelerator type provided.")
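
A hedged sketch of attaching an execution accelerator to an ONNX model config. TensorRTAccelerator appears in the validation code above, but its no-argument constructor and the import path are assumptions.

from model_navigator.api.triton import (
    ONNXModelConfig,
    ONNXOptimization,
    TensorRTAccelerator,
)

# Accelerate ONNX Runtime execution with TensorRT; any accelerator other
# than OpenVINOAccelerator or TensorRTAccelerator is rejected by the
# validation above.
config = ONNXModelConfig(
    optimization=ONNXOptimization(accelerator=TensorRTAccelerator()),
)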

model_navigator.api.triton.PythonModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for a model served with the Python backend.

Parameters:

  • inputs (Sequence[InputTensorSpec], default: field(default_factory=lambda : []) ) –

    Required definition of model inputs.

  • outputs (Sequence[OutputTensorSpec], default: field(default_factory=lambda : []) ) –

    Required definition of model outputs.

backend property

backend

Define the backend value for the config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/python_model_config.py
def __post_init__(self) -> None:
    """Validate the configuration for early error handling."""
    super().__post_init__()
    assert len(self.inputs) > 0, "Model inputs definition is required for Python backend."
    assert len(self.outputs) > 0, "Model outputs definition is required for Python backend."
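
Because the Python backend cannot infer tensor signatures, a sketch must define both inputs and outputs. The InputTensorSpec/OutputTensorSpec field names (name, shape, dtype) and the tensor names shown are assumptions.

import numpy as np

from model_navigator.api.triton import (
    InputTensorSpec,
    OutputTensorSpec,
    PythonModelConfig,
)

# Both specs are mandatory here, per the asserts above.
config = PythonModelConfig(
    inputs=[InputTensorSpec(name="INPUT_0", shape=(-1,), dtype=np.dtype("float32"))],
    outputs=[OutputTensorSpec(name="OUTPUT_0", shape=(-1,), dtype=np.dtype("float32"))],
)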

model_navigator.api.triton.PyTorchModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for a model served with the PyTorch backend.

Parameters:

  • platform (Optional[Platform], default: None ) –

    Override the backend parameter with a platform. Possible option: Platform.PyTorchLibtorch.

  • inputs (Sequence[InputTensorSpec], default: field(default_factory=lambda : []) ) –

    Required definition of model inputs.

  • outputs (Sequence[OutputTensorSpec], default: field(default_factory=lambda : []) ) –

    Required definition of model outputs.

backend property

backend

Define the backend value for the config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/pytorch_model_config.py
def __post_init__(self) -> None:
    """Validate the configuration for early error handling."""
    super().__post_init__()
    assert len(self.inputs) > 0, "Model inputs definition is required for PyTorch backend."
    assert len(self.outputs) > 0, "Model outputs definition is required for PyTorch backend."

    if self.platform and self.platform != Platform.PyTorchLibtorch:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use: {Platform.PyTorchLibtorch}.")
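
A sketch mirroring the Python backend example. The tensor spec fields and the input__0/output__0 naming (the convention Triton's PyTorch backend expects for TorchScript models) are assumptions here.

import numpy as np

from model_navigator.api.triton import (
    InputTensorSpec,
    OutputTensorSpec,
    Platform,
    PyTorchModelConfig,
)

config = PyTorchModelConfig(
    platform=Platform.PyTorchLibtorch,  # optional; any other platform raises
    inputs=[InputTensorSpec(name="input__0", shape=(-1, 3, 224, 224), dtype=np.dtype("float32"))],
    outputs=[OutputTensorSpec(name="output__0", shape=(-1, 1000), dtype=np.dtype("float32"))],
)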

model_navigator.api.triton.TensorFlowModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for a model served with the TensorFlow backend.

Parameters:

  • platform (Optional[Platform], default: None ) –

    Override the backend parameter with a platform. Possible options: Platform.TensorFlowSavedModel, Platform.TensorFlowGraphDef.

  • optimization (Optional[TensorFlowOptimization], default: None ) –

    Possible optimizations for TensorFlow models.

backend property

backend

Define the backend value for the config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorflow_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    super().__post_init__()
    if self.optimization and not isinstance(self.optimization, TensorFlowOptimization):
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")

    platforms = [Platform.TensorFlowSavedModel, Platform.TensorFlowGraphDef]
    if self.platform and self.platform not in platforms:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use one of: {platforms}")

model_navigator.api.triton.TensorFlowOptimization dataclass

Possible optimizations for TensorFlow models.

Parameters:

  • accelerator (Union[AutoMixedPrecisionAccelerator, GPUIOAccelerator, TensorRTAccelerator]) –

    Execution accelerator for the model.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorflow_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    if self.accelerator and type(self.accelerator) not in [
        AutoMixedPrecisionAccelerator,
        GPUIOAccelerator,
        TensorRTAccelerator,
    ]:
        raise ModelNavigatorWrongParameterError("Unsupported accelerator type provided.")
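
A combined sketch for the TensorFlow config and its optimization. AutoMixedPrecisionAccelerator appears in the validation code above, but its no-argument constructor and the import path are assumptions.

from model_navigator.api.triton import (
    AutoMixedPrecisionAccelerator,
    Platform,
    TensorFlowModelConfig,
    TensorFlowOptimization,
)

# SavedModel and GraphDef are the only platforms accepted by the validation.
config = TensorFlowModelConfig(
    platform=Platform.TensorFlowSavedModel,
    optimization=TensorFlowOptimization(accelerator=AutoMixedPrecisionAccelerator()),
)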

model_navigator.api.triton.TensorRTModelConfig dataclass

Bases: BaseSpecializedModelConfig

Specialized model config for a model served on the TensorRT platform.

Parameters:

  • platform (Optional[Platform], default: None ) –

    Override the backend parameter with a platform. Possible option: Platform.TensorRTPlan.

  • optimization (Optional[TensorRTOptimization], default: None ) –

    Possible optimizations for TensorRT models.

backend property

backend

Define the backend value for the config.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorrt_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    super().__post_init__()
    if self.optimization and not isinstance(self.optimization, TensorRTOptimization):
        raise ModelNavigatorWrongParameterError("Unsupported optimization type provided.")

    if self.platform and self.platform != Platform.TensorRTPlan:
        raise ModelNavigatorWrongParameterError(f"Unsupported platform provided. Use: {Platform.TensorRTPlan}.")
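
TensorRT is the only backend for which InstanceGroup.profile passes the base validation (see BaseSpecializedModelConfig.__post_init__ above). In this sketch, the profile field type (a list of optimization-profile names) is an assumption.

from model_navigator.api.triton import InstanceGroup, TensorRTModelConfig

# Accepted because backend == Backend.TensorRT; the same setting on any
# other specialized config raises ModelNavigatorWrongParameterError.
config = TensorRTModelConfig(
    instance_groups=[InstanceGroup(profile=["profile_0"])],
)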

model_navigator.api.triton.TensorRTOptimization dataclass

Possible optimizations for TensorRT models.

Parameters:

  • cuda_graphs (bool, default: False ) –

    Use CUDA graphs API to capture model operations and execute them more efficiently.

  • gather_kernel_buffer_threshold (Optional[int], default: None ) –

    The backend may use a gather kernel to gather input data if the device has direct access to the source buffer and the destination buffer.

  • eager_batching (bool, default: False ) –

    Start preparing the next batch before the model instance is ready for the next inference.

__post_init__

__post_init__()

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/tensorrt_model_config.py
def __post_init__(self):
    """Validate the configuration for early error handling."""
    if not self.cuda_graphs and not self.gather_kernel_buffer_threshold and not self.eager_batching:
        raise ModelNavigatorWrongParameterError("At least one of the optimization options should be enabled.")
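
A sketch of enabling an optimization flag; the import path is an assumption. Note that constructing TensorRTOptimization with every option left at its default raises, per the validation above.

from model_navigator.api.triton import TensorRTModelConfig, TensorRTOptimization

# At least one option must be enabled.
optimization = TensorRTOptimization(cuda_graphs=True)
config = TensorRTModelConfig(optimization=optimization)

# TensorRTOptimization() with all options left off raises
# ModelNavigatorWrongParameterError.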