Keynotes

Keynote Speakers

José Roberto Alvarez, Intel, USA

The Future of Reconfigurable Computing: More than Meets the Eye

 

Abstract

Reconfigurable computing, as a concept, has been around for a long time. Many researchers have explored this jungle and extracted some treasures, but we still do not have a way of building a playground that satisfies all application domains. This keynote will explore new technologies that open the door to more solutions and propose a path toward a better future for implementing reconfigurable computing.

 

Bio

José Roberto Alvarez is a Senior Director in the Intel Programmable Solutions Group in San Jose, California, where he leads the PSG CTO Office, defining and implementing long-term FPGA research strategy and roadmaps. He started his career at Philips Laboratories and has been involved in architecting, designing, and implementing technology products for a variety of industries, including broadcast, consumer, post-production, and computer graphics, at companies such as Philips, Broadcom, S3, Maxim, Xilinx, and four successful start-ups in Silicon Valley. Over the last 25 years he has actively participated in major industry inflection points, most notably the development of multiple video coding standards, desktop video graphics, and Extensible Processing FPGA platforms. His research interests include advanced FPGA architectures and development tools, immersive media technologies, and volumetric coding. Mr. Alvarez earned Bachelor's and Master's degrees in Electrical Engineering with distinction from The City University of New York. He has been granted 53 patents.


Tim Güneysu, Ruhr-University Bochum, Germany

Security Challenges with Modern Reconfigurable Devices

 

Abstract

Reconfigurable devices are popular platforms for hosting high-security applications due to their unique combination of the strong attack resilience of hardware devices and the flexibility of in-field upgrades. Even with the increasing number of security features added by FPGA manufacturers, designers face many challenges in this context, ranging from the efficient and protected implementation of long-term secure cryptographic primitives, to their leakage-free and tamper-resilient realization in the fabric, to their secure and seamless integration with the underlying FPGA security infrastructure and architecture. In particular, recent multi-tenant and cloud-based applications have exposed modern FPGA architectures to new types of attacks.
This talk provides an overview of these security challenges and the most recent attacks, along with an outlook on how to build secure FPGA-based applications.

 

Bio

Tim Güneysu is a professor and head of the Chair for Security Engineering at Ruhr-Universität Bochum in Germany. Since 2016 he has also been part of the Cyber Physical Systems (CPS) division of the German Research Center for Artificial Intelligence (DFKI) in Bremen. Prior to his current positions, he was a senior researcher at UMass Amherst and an assistant and visiting professor at Ruhr-Universität Bochum and the Hubert Curien Lab in Saint-Etienne, respectively. His primary research targets all aspects of secure system engineering, with a particular focus on long-term secure cryptographic primitives, the design of security architectures for embedded systems, and related aspects of hardware security. In this area of applied security and cryptography, Tim has published and contributed to more than 100 peer-reviewed journal and conference publications. He is managing director of TCHES, an associate editor of IEEE Transactions on Computers, and served as program co-chair of CHES 2015, TRUSTED 2016, and CARDIS 2019.


Gabriel Weisz, Microsoft, USA

Global-Scale FPGA-Accelerated Deep Learning Inference with Microsoft's Project Brainwave

 

Abstract

The computational challenges involved in accelerating deep learning have attracted the attention of computer architects across academia and industry. While many deep learning accelerators are currently theoretical or exist only in prototype form, Microsoft's Project Brainwave is in massive-scale production in data centers across the world. Project Brainwave runs on our Catapult networked FPGAs, which provide latency that is low enough to enable "Real-time AI" - deep learning inference that is fast enough for interactive services and achieves peak throughput at a batch size of 1. Project Brainwave powers the latest version of Microsoft's Bing search engine, which uses cutting-edge neural network models that are much larger than the neural networks used in typical benchmarks.

In this talk, I'll discuss how Project Brainwave's FPGA and software components work together to accelerate both first-party workloads, such as Bing search, and third-party applications that use neural network models, such as high-energy physics and manufacturing quality control. I'll also talk about why FPGAs are the perfect platform for the fast-changing world of deep neural networks: their reconfigurability allows us to update our accelerator in place to keep up with the state of the art.

 

Bio

Gabriel Weisz is a Principal Hardware Engineer at Microsoft, where he works on compiling neural networks within Project Brainwave. He holds Ph.D. and M.S. degrees in Computer Science from Carnegie Mellon University and a B.S. with a double major in Computer Science and Electrical Engineering from Cornell University. He is currently co-chair of the machine learning track at the ReConFig conference and serves on the program committees of the FPGA, FPL, and FCCM conferences.
