Confirmed Talks

Programme

Time (CEST) Speaker Title
- - Introduction
10:00-10:30 Prof. Philip Leong (University of Sydney) A Fully Parallel DNN Implementation and its Application to Automatic Modulation Classification
10:30-11:00 Prof. Jürgen Becker (KIT) Neuromorphic FPGA Integration – HPC, Reliability and NN as Key Enablers
11:00-11:30 Dr. Partha Maji (ARM) Robust DNNs using probabilistic approaches
11:30-12:00 Dr. Christos Kyrkou (KIOS) Efficient Deep Vision for Aerial Visual Understanding
12:00-12:30 Mr. Erwei Wang (Imperial College London) Rethinking BNN Inference and Training on Embedded FPGAs
12:30-13:00 Dr. Mohamed Abdelfattah (Samsung AI) Getting the most out of FPGAs for Deep Learning
- - Closing Remarks

Programme Overview

  • 10:00-10:30 : Prof. Philip Leong - University of Sydney

    Title:

    A Fully Parallel DNN Implementation and its Application to Automatic Modulation Classification

    Abstract:

    The high computational complexity of deep neural networks (DNNs) has led to strong interest in exploring low precision as a way to improve implementations. Unfortunately, very low-precision activations and weights can have a significant impact on accuracy. This work demonstrates an efficient DNN implementation that uses throughput matching, in which higher precision on certain layers recovers this accuracy. The technique is applied to automatic modulation classification of radio signals, leveraging the RF capabilities offered by the Xilinx ZCU111 RFSoC platform. The implemented networks achieve high-speed real-time performance with a classification latency of ≈8 µs and an operational throughput of 488k classifications per second. On the open-source RadioML dataset, we demonstrate how to recover 4.3% in accuracy using our technique.
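    The trade-off at the heart of this abstract, very low precision is cheap but lossy, while a few bits more on selected layers recovers accuracy, can be illustrated with a toy uniform quantizer. This is a hedged sketch under assumed names (`quantize` is illustrative), not the speaker's actual implementation:

    ```python
    import numpy as np

    def quantize(x, bits):
        """Uniform symmetric quantization of x to the given bit width."""
        qmax = 2 ** (bits - 1) - 1
        scale = np.abs(x).max() / qmax
        return np.round(x / scale) * scale

    rng = np.random.default_rng(0)
    w = rng.standard_normal(1000)  # stand-in for one layer's weights

    # Mean absolute quantization error shrinks as a layer is granted more bits,
    # which is why selectively raising precision on sensitive layers helps.
    err2 = np.abs(w - quantize(w, 2)).mean()
    err8 = np.abs(w - quantize(w, 8)).mean()
    print(err2, err8)
    ```

    A throughput-matched design would pick the bit width per layer so that the cheap low-precision layers and the few higher-precision layers sustain the same pipeline rate.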

    Bio:

    Philip Leong received the B.Sc., B.E. and Ph.D. degrees from the University of Sydney. In 1993 he was a consultant to ST Microelectronics in Milan, Italy working on advanced flash memory-based integrated circuit design. From 1997-2009 he was with the Chinese University of Hong Kong. He is currently Professor of Computer Systems in the School of Electrical and Information Engineering at the University of Sydney, Visiting Professor at Imperial College and Chief Technology Advisor to ClusterTech.

  • 10:30-11:00 : Prof. Jürgen Becker - KIT

    Title:

    Neuromorphic FPGA Integration – HPC, Reliability and NN as Key Enablers

    Abstract:

    The field of embedded electronic system integration is evolving rapidly and is pushing today's silicon technology to its limits. Increasingly tight VLSI and embedded FPGA integration of cooperating computational and physical elements is necessary, e.g. in smart, on-demand, automated environments as diverse as space, avionics, automotive, chemical processes, civil infrastructure, energy, healthcare, manufacturing (Industry 4.0), communication/consumer appliances, and even monolithic data-acquisition solutions for large particle-physics accelerator experiments. In such highly integrated multi-domain electronic systems, the emphasis is shifting more and more towards smart, real-time, reliable, and distributed HPC, including Neural Network (NN) computation, resulting in newly decentralized/centralized, intelligent, interconnected systems integrated in silicon. Existing technology must evolve to meet these requirements, and HPC, reliability, and NN integration play key roles in this process. Multipurpose adaptivity, including ML, and reliability are crucial, e.g. when scaling down silicon technologies for future computing options, including optimized application integration on FPGAs as well as new reconfigurable neuromorphic NN architecture templates. The talk will discuss the corresponding challenges of such neuromorphic FPGA platforms, including Multi-Core (MC) aspects. This includes the discussion of two FPGA-based applications: optimized NN integration in automotive within EPI (European Processor Initiative, https://www.european-processor-initiative.eu) and data-acquisition solutions in the Belle II accelerator experiment (https://www.youtube.com/watch?v=nGCrrgXSEOk&feature=youtu.be, https://www.belle2.org).

    Bio:

    Jürgen Becker received the Diploma and Ph.D. (Dr.-Ing.) degrees from Technical University Kaiserslautern, Germany. He is a full professor for embedded electronic systems and Head of the Institute for Information Processing Technologies (ITIV) at the Karlsruhe Institute of Technology (KIT). From 2005 to 2009 he served as Vice President for Education at Universitaet Karlsruhe (TH), and from 2009 to 2012 as Chief Higher Education Officer (CHEO) at KIT. From 2012 to 2014 he served as Secretary General of CLUSTER, an association of 12 leading technical universities in Europe. His research interests include Hardware/Software Systems-on-Chip (SoC), Cyber-Physical Systems (CPS), heterogeneous Multi-Core (MC) architectures and design methods, and reconfigurable computing, with applications in embedded systems (automotive, Industry 4.0, avionics, HPC scientific applications and experiments). He has authored more than 400 papers in peer-reviewed international journals and conferences. Prof. Becker is active in numerous international conferences as chairman on TPC and steering committees, e.g. IEEE ISVLSI, IEEE SOCC, RAW, FPL, PATMOS, IFIP VLSI-SoC, DATE, SBCCI, ARC, FCCM, and FPT, among others.

  • 11:00-11:30 : Dr. Partha Maji - ARM

    Title:

    Robust DNNs using probabilistic approaches

    Abstract:

    Deep neural networks (DNNs) deployed in the wild may be asked to make predictions for inputs drawn from a different distribution than the training data. In this setting, current state-of-the-art DNNs produce highly confident predictions that are often wrong. In this talk, we will explore how probabilistic approaches might help mitigate this problem of domain shift. In particular, we will examine the concept of uncertainty in the context of common computer vision tasks and extend the idea to specific solutions commonly explored in the research community. We will conclude with the question of how to design hardware better suited to emerging probabilistic DNNs.
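    One widely used probabilistic approach of the kind the abstract alludes to is Monte Carlo dropout: keeping dropout active at inference time and averaging several stochastic forward passes, so that the spread across passes serves as an uncertainty signal. The sketch below is illustrative only (the tiny random network and function names are assumptions, not the speaker's method):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical tiny classifier: one hidden layer, random weights.
    W1 = rng.standard_normal((4, 16))
    W2 = rng.standard_normal((16, 3))

    def mc_pass(x, p_drop=0.5):
        """One stochastic forward pass with dropout left ON at test time."""
        h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop          # random dropout mask
        h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())
        return e / e.sum()                           # softmax probabilities

    x = rng.standard_normal(4)
    probs = np.stack([mc_pass(x) for _ in range(100)])
    mean = probs.mean(axis=0)   # predictive distribution
    std = probs.std(axis=0)     # per-class disagreement = uncertainty signal
    print(mean, std)
    ```

    For an out-of-distribution input, the per-class standard deviation tends to grow, flagging predictions that a single deterministic pass would report with unwarranted confidence.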

    Bio:

    Partha Maji is a Principal Research Scientist at Arm's Machine Learning Research Lab based in Cambridge, where he leads the core research on the efficient implementation of probabilistic machine learning on resource-constrained devices. He received a PhD in computer science from the University of Cambridge, where he focussed on the optimisation and acceleration of deep neural networks. Prior to that, he received an MSc in system-on-chip design from the University of Edinburgh. Partha spent a decade in the semiconductor industry as a CPU subsystem architect and an ASIC design engineer. He has extensive experience with the end-to-end chip design process through multiple tape-outs of low-power chips at 65/40/28/22 nm deep-submicron CMOS process technology. His current research interests lie in multiple disciplines that bridge machine learning, mobile/embedded systems, computer architecture, and hardware implementation. Partha has received several excellence awards from industry, including a Mentor Graphics prize for outstanding achievement in the master's degree. Partha also received multiple accolades for his research on on-chip interconnect, including an award from Epson Europe and the IET, UK. He was also recognized by the European Neural Network Society for high-quality contributions to machine learning research. Partha was a recipient of the prestigious UK Chevening scholarship.

  • 11:30-12:00 : Dr. Christos Kyrkou - KIOS

    Title:

    Efficient Deep Vision for Aerial Visual Understanding

    Abstract:

    Deep Learning based perception can provide state-of-the-art accuracy for remote sensing technologies such as Unmanned Aerial Vehicles (UAVs), potentially enhancing their capabilities in a wide spectrum of applications. However, the integration of deep learning introduces heavy computational requirements, preventing the deployment of such computer vision algorithms in many scenarios that impose low-latency constraints on inference. This talk will highlight the potential of small neural networks to significantly reduce the computational workload of deep-learning-based detection systems and improve the inference time while offering competitive accuracies.

    Bio:

    Christos Kyrkou is a Research Associate at the KIOS Research & Innovation Center of Excellence at the University of Cyprus. He received the B.Sc., M.Sc., and PhD degrees in Computer Engineering in 2008, 2010, and 2014, respectively, from the University of Cyprus. He is an author/co-author of more than 45 scientific publications in international peer-reviewed conferences and journals. He has worked in European and nationally funded projects related to visual monitoring with UAVs, smart camera networks, and hardware acceleration of pattern recognition algorithms, and is currently leading the research efforts of the KIOS CoE for the development of robust visual AI for autonomous vehicles under the H2020 project CARAMEL. His research interests include computer vision, smart cameras, real-time embedded vision systems, and deep learning/machine learning for visual intelligence. He is also a graduate of the UDACITY Self-Driving Car Engineering Nanodegree program and has received GPU grants from NVIDIA in support of his research.

  • 12:00-12:30 : Mr. Erwei Wang - Imperial College London

    Title:

    Rethinking BNN Inference and Training on Embedded FPGAs

    Abstract:

    With the growing availability of high-performance edge devices comes rising demand for on-device inference and even training. In this talk, I will introduce our recent research progress on approximation-based deep neural network inference and training methods which increase resource efficiency on embedded-scale FPGAs. Our first project is LUTNet, an end-to-end hardware-software framework for the construction of area-efficient binary neural network accelerators using FPGAs' native LUTs as inference operators. We demonstrate that the exploitation of LUT flexibility allows for far heavier pruning than possible in prior works, resulting in significant area savings while achieving the same accuracy. To facilitate on-device learning of binary neural network parameters, we introduce a low-cost training strategy exhibiting aggressive memory footprint reductions and energy savings against the former state of the art.
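    The binary-network building blocks the abstract refers to can be sketched in a few lines: inference binarizes weights to ±1 (so dot products reduce to cheap XNOR/popcount logic in hardware), while training passes gradients through the non-differentiable sign function with a straight-through estimator. This is a generic illustration of standard BNN practice under assumed names, not the LUTNet framework itself:

    ```python
    import numpy as np

    def binarize(w):
        """Sign binarization used for BNN inference (zero maps to +1)."""
        return np.where(w >= 0, 1.0, -1.0)

    def ste_grad(w, upstream, clip=1.0):
        """Straight-through estimator: pass the upstream gradient through
        sign(), but zero it where |w| exceeds the clipping threshold."""
        return upstream * (np.abs(w) <= clip)

    rng = np.random.default_rng(1)
    w = rng.standard_normal((8, 4))   # real-valued "shadow" weights
    x = rng.standard_normal(8)

    y = x @ binarize(w)               # binarized forward pass
    g = ste_grad(w, np.ones_like(w))  # gradient w.r.t. the shadow weights
    print(y.shape, g.shape)
    ```

    On an FPGA, each ±1 dot product in the forward pass maps naturally onto LUT logic, which is the resource the talk's approach exploits directly.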

    Bio:

    Erwei Wang is a PhD student in the Department of Electrical and Electronic Engineering’s Circuits and Systems group at Imperial College London. His research interests include deep neural networks, computer vision systems, and high-performance computing architectures, with an emphasis on improving speed and energy efficiency for custom hardware implementation.

  • 12:30-13:00 : Dr. Mohamed Abdelfattah - Samsung AI

    Title:

    Getting the most out of FPGAs for Deep Learning

    Abstract:

    Field-Programmable Gate-Arrays (FPGAs) have many strengths and weaknesses when it comes to accelerating deep neural networks (DNNs). In this talk I will give my perspective on how to play on FPGA strengths to reach maximum efficiency with FPGAs. First, I will describe our AutoML-based codesign methodology that produces customized accelerator-DNN pairs that simultaneously boost both accuracy and efficiency. In the second part of my talk I will look further ahead and describe my views on where FPGA architecture needs to go next to keep up with the compute demands of deep learning.

    Bio:

    Mohamed is a Senior Researcher at the Samsung AI Center in Cambridge UK, working on the codesign of deep learning algorithms and hardware. Before that, he was at Intel building an FPGA-based accelerator and compiler for deep neural networks. Mohamed did his PhD at the University of Toronto, during which he was awarded the Vanier Canada Graduate scholarship and three best paper awards for his work on embedded networks-on-chip for FPGAs.

  • Closing Remarks