Inplace Optimize
Model Navigator Inplace Optimize.
model_navigator.inplace.wrapper
Inplace Optimize model wrapper.
Module
Module(
    module,
    optimize_config=None,
    name=None,
    input_mapping=None,
    output_mapping=None,
    timer=None,
    offload_parameters_to_cpu=False,
)
Bases: ObjectProxy
Inplace Optimize module wrapper.
This class wraps a torch module and provides inplace optimization functionality. Depending on the configuration, the module is optimized, recorded, or passed through.
This wrapper can be used in place of a torch module, and will behave identically to the original module.
Parameters:
- module (Module) – torch module to wrap.
- optimize_config (Optional[OptimizeConfig], default: None) – optimization configuration.
- name (Optional[str], default: None) – module name.
- input_mapping (Optional[Callable], default: None) – function to map module inputs to the expected input.
- output_mapping (Optional[Callable], default: None) – function to map module outputs to the expected output.
- offload_parameters_to_cpu (bool, default: False) – offload parameters to CPU.
Example
```python
import torch
import model_navigator as nav

model = torch.nn.Linear(10, 10)
model = nav.Module(model)
```
Initialize Module.
Source code in model_navigator/inplace/wrapper.py
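The input_mapping and output_mapping arguments are ordinary callables applied around the wrapped module's call. A minimal sketch of such mappings (the function names, key names, and shapes here are hypothetical, not part of the API):

```python
# Hypothetical mapping callables for a module whose wrapped runner expects
# positional inputs while callers pass a dictionary. Names are illustrative.

def dict_to_args(sample):
    # Map the caller's dict of inputs to the positional tuple the module expects.
    return (sample["input_ids"], sample["attention_mask"])

def tuple_to_dict(outputs):
    # Map the module's tuple output back to the dict the caller expects.
    return {"logits": outputs[0]}

sample = {"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1]}
args = dict_to_args(sample)         # ([1, 2, 3], [1, 1, 1])
result = tuple_to_dict((args[0],))  # {"logits": [1, 2, 3]}
```

Such callables would then be passed as `nav.Module(model, input_mapping=dict_to_args, output_mapping=tuple_to_dict)`.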
is_ready_for_optimization
property
Check if the module is ready for optimization.
__call__
Call the wrapped module.
This method overrides the call method of the wrapped module. Once the module has been optimized, calls are dispatched to the optimized module instead of the original one.
Source code in model_navigator/inplace/wrapper.py
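Conceptually this is a lazy-swap dispatch: forward to the original module until an optimized replacement becomes available. A simplified, hypothetical sketch (not the library's implementation):

```python
class LazySwapProxy:
    """Simplified stand-in for the wrapper's __call__ dispatch."""

    def __init__(self, original):
        self._original = original
        self._optimized = None  # filled in once optimization produces a replacement

    def set_optimized(self, optimized):
        self._optimized = optimized

    def __call__(self, *args, **kwargs):
        # Prefer the optimized callable once it exists; otherwise fall back.
        target = self._optimized if self._optimized is not None else self._original
        return target(*args, **kwargs)

proxy = LazySwapProxy(lambda x: x + 1)
before = proxy(1)                      # original callable: 2
proxy.set_optimized(lambda x: x * 10)
after = proxy(1)                       # optimized callable: 10
```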
load_optimized
Load optimized module.
Source code in model_navigator/inplace/wrapper.py
optimize
Optimize the module.
Source code in model_navigator/inplace/wrapper.py
module
module(module_callable=None, optimize_config=None, name=None, input_mapping=None, output_mapping=None)
Inplace Optimize module wrapper decorator.
This decorator wraps a torch module and provides inplace optimization functionality. Depending on the configuration, the module is optimized, recorded, or passed through.
This wrapper can be used in place of a torch module, and will behave identically to the original module.
Parameters:
- module_callable (Optional[Callable[[Any], Module]], default: None) – decorated callable.
- optimize_config (Optional[OptimizeConfig], default: None) – optimization configuration.
- name (Optional[str], default: None) – module name.
- input_mapping (Optional[Callable], default: None) – function to map module inputs to the expected input.
- output_mapping (Optional[Callable], default: None) – function to map module outputs to the expected output.
Example
```python
import torch
import model_navigator as nav

@nav.module
def my_model():
    return torch.nn.Linear(10, 10)

model = my_model()
```
Source code in model_navigator/inplace/wrapper.py
model_navigator.inplace.config
Inplace Optimize configuration.
InplaceConfig
Inplace Optimize configuration.
Initialize InplaceConfig.
Source code in model_navigator/inplace/config.py
max_num_samples_stored
property
writable
Get the maximum number of samples to store.
min_num_samples
property
writable
Get the minimum number of samples to collect before optimizing.
Mode
Bases: Enum
Mode of the inplace Optimize.
- OPTIMIZE: record registered models and optimize them when enough samples are collected.
- RECORDING: record registered models.
- RUN: replace registered models with optimized ones.
- PASSTHROUGH: do nothing.
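The four modes can be pictured as a small enum. The sketch below mirrors the documented members but is not the library's actual definition (the member values are assumptions):

```python
from enum import Enum

class Mode(Enum):
    # Record registered models and optimize once enough samples are collected.
    OPTIMIZE = "optimize"
    # Only record inputs passed to registered models.
    RECORDING = "recording"
    # Replace registered models with their optimized counterparts.
    RUN = "run"
    # Do nothing; calls go straight to the original modules.
    PASSTHROUGH = "passthrough"
```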
OptimizeConfig
dataclass
Configuration for inplace Optimize.
Parameters:
- sample_count (int, default: DEFAULT_SAMPLE_COUNT) – Limits how many samples will be used from the dataloader.
- batching (Optional[bool], default: True) – Enable or disable batching on the first (index 0) dimension of the model.
- input_names (Optional[Tuple[str, ...]], default: None) – Model input names.
- output_names (Optional[Tuple[str, ...]], default: None) – Model output names.
- target_formats (Optional[Tuple[Union[str, Format], ...]], default: None) – Target model formats for the optimize process.
- target_device (Optional[DeviceKind], default: CUDA) – Target device for the optimize process.
- runners (Optional[Tuple[Union[str, Type[NavigatorRunner]], ...]], default: None) – Use only the runners provided as parameter.
- optimization_profile (Optional[OptimizationProfile], default: None) – Optimization profile for conversion and profiling.
- workspace (Optional[Path], default: None) – Workspace where packages will be extracted.
- verbose (Optional[bool], default: False) – Enable verbose logging.
- debug (Optional[bool], default: False) – Enable debug logging from commands.
- verify_func (Optional[VerifyFunction], default: None) – Function for additional model verification.
- custom_configs (Optional[Sequence[CustomConfig]], default: None) – Sequence of CustomConfigs used to control produced artifacts.
to_dict
Convert OptimizeConfig to dictionary.
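to_dict follows the usual dataclass-to-dictionary pattern. A simplified stand-in (the field subset and defaults here are illustrative, not the real class):

```python
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class OptimizeConfigSketch:
    # Simplified stand-in for OptimizeConfig; names mirror the documented
    # parameters, but the defaults here are illustrative only.
    sample_count: int = 100
    batching: Optional[bool] = True
    input_names: Optional[Tuple[str, ...]] = None
    target_device: str = "cuda"

    def to_dict(self) -> dict:
        # Convert the config to a plain dictionary, as to_dict does.
        return asdict(self)

cfg = OptimizeConfigSketch(sample_count=10, input_names=("input__0",))
as_dict = cfg.to_dict()
```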