Talks and Presentations

Xilinx Developer Forum (XDF) 2019 Europe

November 13, 2019

Presentation, World Forum, The Hague, Netherlands

This talk describes how an RNN accelerator on a Xilinx Alveo U250 outperforms alternatives in throughput, power and latency. This is achieved by exploiting unstructured sparsity and quantisation, implemented on a scalable array of highly optimised MAU Accelerator™ cores. A comparison using a DeepSpeech benchmark demonstrates this FPGA advantage across a range of applications, including speech-to-text transcription and time-series analysis.
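To illustrate the two techniques the talk combines, here is a minimal NumPy sketch of unstructured (per-weight) pruning followed by 8-bit quantisation of a weight matrix. All sizes, the pruning threshold, and the single-scale quantisation scheme are illustrative assumptions, not details of the accelerator itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense weight matrix and input vector (sizes are arbitrary).
W = rng.standard_normal((64, 64)).astype(np.float32)
x = rng.standard_normal(64).astype(np.float32)

# Unstructured sparsity: prune individual weights by magnitude,
# with no constraint on where the zeros fall.
threshold = np.quantile(np.abs(W), 0.9)   # keep the largest 10% of weights
mask = np.abs(W) >= threshold
W_sparse = W * mask

# Quantisation: map the surviving weights to int8 with a single scale factor.
scale = float(np.abs(W_sparse).max()) / 127.0
W_q = np.round(W_sparse / scale).astype(np.int8)

# Inference then works on cheap integer weights, rescaling once at the end.
y_approx = (W_q.astype(np.float32) * scale) @ x

sparsity = 1.0 - mask.mean()
print(f"sparsity: {sparsity:.0%}, weight dtype: {W_q.dtype}")
```

Roughly 90% of the multiply-accumulates can be skipped entirely and the rest use 8-bit operands, which is the kind of compute and memory-bandwidth reduction that maps well onto an array of FPGA MAC units.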

Intel Spoken Language Technologies Summit (iSLTS) 2019 Keynote

October 23, 2019

Presentation, Intel, Folsom, California

Recurrent neural networks (RNNs) power workloads such as recommender systems, machine translation, speech synthesis and speech transcription, and account for a significant proportion of data center deep learning inference. Productionized versions of these models typically contain tens to hundreds of millions of parameters, and some have been scaled to billions of parameters given enough data. Increasing the size of a model also increases its compute and memory requirements, so reducing the computational cost of these models translates directly into cost and energy savings for service operators.