ONNX Runtime supports TensorFlow models converted to ONNX format through the tf2onnx converter. This enables TensorFlow-trained models to leverage ONNX Runtime's hardware acceleration and cross-platform deployment capabilities.
ONNX Runtime natively supports models exported from PyTorch via the ONNX format. PyTorch models can be converted to ONNX using torch.onnx.export() and then optimized and deployed through ONNX Runtime for production inference across diverse hardware targets.
Keras models can likewise be exported to ONNX (for example via the tf2onnx converter) for deployment across different inference engines and hardware accelerators. This enables Keras-trained models to run on ONNX Runtime, TensorRT, OpenVINO, and other ONNX-compatible execution providers.