Export Formats and Hyperparameters
Matrice.ai supports multiple formats for exporting your trained models. Here’s an overview of each format and the associated hyperparameters you can adjust.
Supported Export Formats
ONNX: Open Neural Network Exchange format for cross-platform compatibility.
OpenVINO: Intel’s optimized format for inference on Intel hardware.
TorchScript: Serialized and optimized format for PyTorch models.
PyTorch: An open-source machine learning library developed by Facebook’s AI Research lab, known for its flexibility and dynamic computational graphs.
TensorRT: NVIDIA’s platform for high-performance deep learning inference. It optimizes trained deep learning models to produce highly optimized runtime engines.
TensorFlow: An open-source machine learning framework developed by Google. It’s widely used for both research and production.
Note: Export formats are model-dependent; some models offer every export option, while others support only a subset.
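Because format availability varies by model, it can help to validate a requested format before starting an export job. The sketch below is illustrative only: the model families and the support matrix are hypothetical examples, not Matrice.ai’s actual compatibility table.

```python
# Hypothetical support matrix: which export formats a given model family
# allows. These entries are examples for illustration, not Matrice.ai's
# real mapping.
SUPPORTED_FORMATS = {
    "detector": {"PyTorch", "TorchScript", "ONNX", "OpenVINO", "TensorRT"},
    "classifier": {"PyTorch", "ONNX", "TensorFlow"},
}


def available_formats(model_family):
    """Return the export formats available for a model family (sorted),
    raising ValueError for an unknown family."""
    try:
        return sorted(SUPPORTED_FORMATS[model_family])
    except KeyError:
        raise ValueError(f"Unknown model family: {model_family}")


def check_export_format(model_family, fmt):
    """Raise if the requested format is not supported by the family."""
    if fmt not in SUPPORTED_FORMATS.get(model_family, set()):
        raise ValueError(
            f"{fmt!r} is not an available export format for {model_family!r}"
        )
```

Checking up front turns an unsupported-format request into an immediate, descriptive error instead of a failed export job.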
Hyperparameters
| Key Name | Value Type | Default Value | Predefined Values | Description |
|---|---|---|---|---|
| | Boolean | False | [True, False] | Controls dynamic axes for variable input sizes. |
| | Boolean | False | [True, False] | Simplifies the model structure during export. |
| | Boolean | False | [True, False] | Enables INT8 quantization for smaller, faster models. |
| | Boolean | False | [True, False] | Includes Non-Maximum Suppression for object detection models. |
| | Boolean | False | [True, False] | Optimizes the model graph for better performance. |
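Since every hyperparameter in the table is a Boolean defaulting to False, a small helper can merge user overrides into the defaults and reject invalid input early. The key names below (`dynamic`, `simplify`, `int8`, `nms`, `optimize`) are assumptions chosen to match the descriptions above, not confirmed Matrice.ai parameter names.

```python
# Assumed key names for the export hyperparameters described in the
# table above; the actual Matrice.ai names may differ.
DEFAULT_EXPORT_PARAMS = {
    "dynamic": False,   # dynamic axes for variable input sizes
    "simplify": False,  # simplify the model structure during export
    "int8": False,      # INT8 quantization for smaller, faster models
    "nms": False,       # embed Non-Maximum Suppression (detection models)
    "optimize": False,  # optimize the model graph for better performance
}


def resolve_export_params(overrides=None):
    """Merge user overrides into the defaults, rejecting unknown keys
    and non-Boolean values (every parameter in the table is Boolean)."""
    params = dict(DEFAULT_EXPORT_PARAMS)
    for key, value in (overrides or {}).items():
        if key not in params:
            raise KeyError(f"Unknown export hyperparameter: {key!r}")
        if not isinstance(value, bool):
            raise TypeError(f"{key!r} must be True or False")
        params[key] = value
    return params
```

Validating overrides this way surfaces typos and type mistakes at configuration time rather than partway through an export.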