On-Device Training with ONNX Runtime
On-Device Training refers to the process of training a model on an edge device, such as a mobile phone, embedded device, gaming console, or web browser. This is in contrast to training a model on a server or in the cloud. Training on the device can be used for:
- Personalization tasks, where the model needs to be trained on the user’s data.
- Federated learning tasks, where the model is locally trained on data that is distributed across multiple devices in an effort to build a more robust aggregated global model.
- Improving data privacy and security, especially when working with sensitive data that cannot be shared with a server or a cloud.
- Training locally (without impacting application functionality) when network connectivity is unreliable or limited.
ONNX Runtime Training offers an easy way to efficiently train ONNX models and run inference with them on edge devices. The training process is divided into two phases:
The Offline Phase
In this phase, training artifacts are prepared on a server, cloud, or desktop that does not have access to user data. These artifacts can be generated using ONNX Runtime Training's artifact generation tools, available in the Python package.
Refer to the installation instructions for details on how to install the Python package.
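As an illustration, the Python artifact generation utilities can take a forward-only ONNX model (for example, one exported from PyTorch) and produce the training, eval, and optimizer graphs along with an initial checkpoint. The model path, the rule used to pick trainable parameters, and the output directory below are placeholders for this sketch; the available loss and optimizer options may vary by release.

```python
import onnx
from onnxruntime.training import artifacts

# Load the forward-only ONNX model exported in advance (path is illustrative).
model = onnx.load("mobilenetv2.onnx")

# Choose which parameters are trainable; everything else stays frozen.
# Here we assume the trainable layers are named with a "classifier" prefix.
requires_grad = [p.name for p in model.graph.initializer if p.name.startswith("classifier")]
frozen_params = [p.name for p in model.graph.initializer if p.name not in requires_grad]

# Generate the training, eval, and optimizer graphs plus the initial checkpoint.
artifacts.generate_artifacts(
    model,
    requires_grad=requires_grad,
    frozen_params=frozen_params,
    loss=artifacts.LossType.CrossEntropyLoss,
    optimizer=artifacts.OptimType.AdamW,
    artifact_directory="training_artifacts",
)
```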
The Training Phase
Once these artifacts are generated, they can be deployed to production scenarios on edge devices. ONNX Runtime offers a wide range of packages in multiple language bindings.
Refer to the installation instructions for a complete list of all language bindings.
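A minimal training-loop sketch in Python (the same flow is available in the other language bindings) loads the generated artifacts and runs training steps over data that stays on the device. The file names, batch shapes, and placeholder data below are assumptions for illustration only.

```python
import numpy as np
from onnxruntime.training.api import CheckpointState, Module, Optimizer

# Load the artifacts produced in the offline phase (file names are illustrative).
state = CheckpointState.load_checkpoint("training_artifacts/checkpoint")
module = Module(
    "training_artifacts/training_model.onnx",
    state,
    "training_artifacts/eval_model.onnx",
)
optimizer = Optimizer("training_artifacts/optimizer_model.onnx", module)

# Placeholder local data; in a real application this comes from on-device
# storage and the shapes match the deployed model's inputs.
local_batches = [
    (np.random.rand(8, 3, 224, 224).astype(np.float32),
     np.random.randint(0, 10, size=8).astype(np.int64))
    for _ in range(4)
]

module.train()  # switch the module to training mode
for inputs, labels in local_batches:
    # Forward + backward pass; the training graph returns the loss.
    loss = module(inputs, labels)
    optimizer.step()          # apply the parameter update
    module.lazy_reset_grad()  # clear gradients before the next step
```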
Once training on the edge device is complete, an inference-ready ONNX model can be generated on the edge device itself. This model can then be used with ONNX Runtime for inferencing.
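Continuing the sketch above, the trained parameters can be folded into an inference-only graph and used with a standard ONNX Runtime inference session. The graph output name, file paths, and input shape here are assumptions for illustration.

```python
import numpy as np
import onnxruntime

# Export an inference-ready model from the trained module
# (the graph output name "output" is an assumption for this sketch).
module.export_model_for_inferencing("inference_model.onnx", ["output"])

# Run inference on the device with the freshly personalized model.
session = onnxruntime.InferenceSession(
    "inference_model.onnx", providers=["CPUExecutionProvider"]
)
input_name = session.get_inputs()[0].name
sample = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
predictions = session.run(None, {input_name: sample})[0]
```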
Installation
Refer to the installation instructions for details on how to install for your scenario.
Building from Source
Refer to the build instructions for details on how to build for your custom scenario.
Feature Request, Bug Report or Help Needed
If you need help, want to report a bug, or have a feature request, please open an issue.