# Model Inputs and Outputs
## model_navigator.api.triton.InputTensorSpec

dataclass

InputTensorSpec(name, shape, dtype=None, reshape=(), is_shape_tensor=False, optional=False, format=None, allow_ragged_batch=False)

Bases: BaseTensorSpec

Stores the specification of a single input tensor. This includes the name, shape, dtype, and further parameters available for an input tensor in Triton Inference Server. Read more in the Triton Inference Server model configuration documentation.
Parameters:

- optional (bool, default: False) – Flag marking the input as optional for model execution.
- format (Optional[InputTensorFormat], default: None) – The format of the input.
- allow_ragged_batch (bool, default: False) – Flag marking the input as allowed to be "ragged" in a dynamically created batch.
### __post_init__

Validate the configuration for early error handling.

Source code in model_navigator/triton/specialized_configs/common.py
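The early-validation idea behind `__post_init__` can be sketched with a simplified stand-in dataclass (hypothetical class and checks for illustration only; the library's actual validation in `common.py` is more extensive):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class TensorSpecSketch:
    """Simplified stand-in for InputTensorSpec; not the library class."""

    name: str
    shape: Tuple[int, ...]
    dtype: Optional[str] = None
    optional: bool = False

    def __post_init__(self):
        # Validate eagerly so a misconfiguration fails at construction time,
        # not later during model deployment.
        if not self.name:
            raise ValueError("Tensor name must not be empty.")
        if any(d == 0 or d < -1 for d in self.shape):
            raise ValueError(
                f"Invalid shape {self.shape}: dimensions must be positive or -1 (dynamic)."
            )


# Valid spec: dynamic batch dimension expressed as -1.
spec = TensorSpecSketch(name="input__0", shape=(-1, 3, 224, 224), dtype="FP32")

# Invalid spec: rejected immediately at construction.
try:
    TensorSpecSketch(name="", shape=(1,))
except ValueError as err:
    print("rejected:", err)
```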
## model_navigator.api.triton.InputTensorFormat

Bases: Enum

Format for the input tensor. Read more in the Triton Inference Server model configuration documentation.
Members:

- FORMAT_NONE – 0
- FORMAT_NHWC – 1
- FORMAT_NCHW – 2
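The three members listed above can be mirrored with a stdlib enum (a stand-in for illustration, not the library's class):

```python
from enum import Enum


class InputTensorFormatSketch(Enum):
    """Stand-in mirroring the documented members and values."""

    FORMAT_NONE = 0  # no layout information
    FORMAT_NHWC = 1  # batch, height, width, channels
    FORMAT_NCHW = 2  # batch, channels, height, width


# Members can be looked up by name or by value.
fmt = InputTensorFormatSketch["FORMAT_NCHW"]
print(fmt, fmt.value)
```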
## model_navigator.api.triton.OutputTensorSpec

dataclass

OutputTensorSpec(name, shape, dtype=None, reshape=(), is_shape_tensor=False, label_filename=None)

Bases: BaseTensorSpec

Stores the specification of a single output tensor. This includes the name, shape, dtype, and further parameters available for an output tensor in Triton Inference Server. Read more in the Triton Inference Server model configuration documentation.
Parameters:

- label_filename (Optional[str], default: None) – The file containing labels associated with the output.

### __post_init__

Validate the configuration for early error handling.
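Taken together, the input and output specs describe a model's I/O signature. A minimal self-contained sketch of that pairing (stand-in dataclasses whose field names follow the signatures above; dtype simplified to a string):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class InputSpecSketch:
    """Stand-in following the InputTensorSpec signature."""

    name: str
    shape: Tuple[int, ...]
    dtype: Optional[str] = None
    optional: bool = False


@dataclass
class OutputSpecSketch:
    """Stand-in following the OutputTensorSpec signature."""

    name: str
    shape: Tuple[int, ...]
    dtype: Optional[str] = None
    label_filename: Optional[str] = None


# Describe a hypothetical image classifier: dynamic batch (-1), NCHW input,
# 1000-class output whose indices map to names in a label file.
inputs = [InputSpecSketch(name="image", shape=(-1, 3, 224, 224), dtype="FP32")]
outputs = [
    OutputSpecSketch(
        name="logits", shape=(-1, 1000), dtype="FP32", label_filename="labels.txt"
    )
]
print(inputs[0].name, "->", outputs[0].name)
```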