Chromium Implementation Status

WebNN Spec op · Web Platform Tests · Chromium Implementation:
- XNNPack · CPU backend [1] (Operations [4])
- DirectML · GPU backend [2] (Operations [5])
- MLService · CPU backend [3] (Operations [6])
Legend:
✅ Supported
⏳ Partly Implemented
🚀 Work in Progress
❌ Not Supported

JavaScript ML Frameworks Integration Status


WebNN Spec op · JavaScript ML Frameworks Integration:
- ONNX Runtime Web Execution Provider [7] (Operations, EP Version)
- TensorFlow.js/TFLite External Delegate [8] (Operations, Delegate Version)
Legend:
✅ Supported
⏳ Partly Implemented
🚀 Work in Progress
❌ Not Supported
The total number of WebNN ops is 78. These tables currently list the ops that are implemented, or work in progress, across the Chromium backends and JavaScript machine learning frameworks.

[1] XNNPack node definition in xnn_define_*
[2] DirectML API
[3] MLService / TensorFlow Lite Builtin Options
[4] This feature is experimental. It can be enabled by setting the #web-machine-learning-neural-network flag to Enabled. Supported on CPUs on Windows.
[5] This feature is experimental. It can be enabled by setting the #web-machine-learning-neural-network flag to Enabled. Supported on GPUs on Windows 11 21H2 or higher.
[6] This feature is experimental. It can be enabled by setting the #web-machine-learning-neural-network flag to Enabled. Supported on CPUs on ChromeOS.
[7] ONNX Operator Schemas and WebNN EP Helper
[8] TensorFlow Lite built-in operators kTfLiteBuiltin*
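Once the experimental flag above is enabled, the API can be exercised directly from script. The sketch below is a minimal, hedged example of feature-detecting WebNN and building a trivial graph; it uses the entry points defined in the WebNN spec (navigator.ml.createContext, MLGraphBuilder), and the exact descriptor field names (shape vs. the older dimensions) have varied across spec revisions.

```javascript
// Minimal WebNN sketch: feature-detect, then build c = a + b over 1-D tensors.
// Assumes the API shape from the WebNN spec; in Chromium this requires the
// experimental #web-machine-learning-neural-network flag to be Enabled.
async function tryWebNN() {
  if (typeof navigator === "undefined" || !("ml" in navigator)) {
    return "WebNN not available";
  }
  // deviceType hints which backend services the context (e.g. 'cpu' or 'gpu').
  const context = await navigator.ml.createContext({ deviceType: "cpu" });
  const builder = new MLGraphBuilder(context);
  // Operand descriptor; 'shape' is the current spec name, 'dimensions' the
  // older one -- both are included here to tolerate either revision.
  const desc = { dataType: "float32", shape: [4], dimensions: [4] };
  const a = builder.input("a", desc);
  const b = builder.input("b", desc);
  const c = builder.add(a, b);
  await builder.build({ c });
  return "WebNN graph built";
}

tryWebNN().then((result) => console.log(result));
```

In environments without WebNN (including Node.js and browsers without the flag), the function returns the fallback string instead of throwing, which makes it safe to call unconditionally.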

Interested in contributing to this implementation status page? See contributing guidelines.