By using ONNX Runtime with BERT, users can lower latency for language representation on the Bing search platform. Microsoft has previously said BERT brings “the largest improvement in search experience” for Bing. With ONNX Runtime support, developers can serve BERT at latencies as low as 1.7 milliseconds (on an Nvidia V100 GPU), a capability Microsoft told VentureBeat has previously only been available to major tech companies. “Since the BERT model is mainly composed of stacked transformer cells, we optimize each cell by fusing key sub-graphs of multiple elementary operators into single kernels for both CPU and GPU, including Self-Attention, LayerNormalization and Gelu layers. This significantly reduces memory copy between numerous elementary computations,” Microsoft senior program manager Emma Ning said today in a blog post.
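The fusion Ning describes replaces chains of elementary operators with single kernels so intermediate results are never written out to memory. A rough illustration of the idea in NumPy follows — a simplified sketch only, not ONNX Runtime's actual kernel code; the function names and the tanh approximation of GeLU are assumptions for the example:

```python
import math
import numpy as np

def gelu(x):
    # tanh approximation of GeLU, as commonly used in BERT implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def layernorm_gelu_unfused(x, gamma, beta, eps=1e-5):
    # Elementary ops: every step materializes a full intermediate array,
    # so data makes several round trips through memory.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    normed = (x - mean) / np.sqrt(var + eps)
    scaled = normed * gamma + beta
    return gelu(scaled)

def layernorm_gelu_fused(x, gamma, beta, eps=1e-5):
    # "Fused": one pass per row writing straight to the output buffer,
    # with no intermediate arrays allocated in between.
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        row = x[i]
        mean = row.mean()
        inv = 1.0 / np.sqrt(row.var() + eps)
        for j in range(x.shape[1]):
            v = (row[j] - mean) * inv * gamma[j] + beta[j]
            out[i, j] = 0.5 * v * (1.0 + math.tanh(
                math.sqrt(2.0 / math.pi) * (v + 0.044715 * v**3)))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
gamma, beta = np.ones(8), np.zeros(8)
a = layernorm_gelu_unfused(x, gamma, beta)
b = layernorm_gelu_fused(x, gamma, beta)
```

Both paths compute the same result; the fused version simply avoids the intermediate buffers, which is where the memory-copy savings come from.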

ONNX

Open Neural Network Exchange (ONNX) is an open standard format for AI models that works across frameworks. ONNX Runtime is a high-performance inference engine for machine learning models on Windows, Linux, and Mac. Developers can train AI models in any supported framework and use ONNX Runtime to take those models into production in the cloud and at the edge.
