Deep Human Signal Processing

Prof. Björn Schuller, Imperial College London, UK, and University of Augsburg, Germany

Abstract: Intelligent Human-Machine Communication and Interaction has benefited greatly from developments in deep learning in recent years. In this tutorial lecture, we shall deal with the corresponding change in the human signal processing landscape. This includes moving from expert-crafted features to transferred and self-learnt representations and architectures of deep neural networks, from traditional pre-processing to deep source separation for the enhancement of signals of interest, and to advanced back-end decision making. To enable these approaches, we shall discuss convolutional and recurrent network topologies and related topics such as attention modelling and connectionist temporal classification. In addition, we will discuss learnt data augmentation, such as by generative adversarial methods. To move towards real-life application in challenging HCI frameworks, we will further explore the optimal fusion of modalities, lifelong learning approaches, and combinations of active and reinforcement learning to best model users over repeated interactions. Application examples will focus on spoken language and video-based communication, such as from the domain of Affective Computing.
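As a minimal illustration of the attention modelling listed among the tutorial topics, scaled dot-product attention over a sequence of feature vectors can be sketched in a few lines of NumPy. This is a self-contained sketch; all shapes, names, and the toy data are assumptions, not material from the tutorial itself:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend queries Q (n_q, d) over keys K and values V (n_k, d)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights                     # attended output, weights

# Toy example: 2 queries attending over a sequence of 5 feature vectors
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((5, 4))
V = rng.standard_normal((5, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # each query yields one attended feature vector
```

Each row of `w` is a distribution over the input sequence, which is what makes attention weights directly interpretable, e.g. as an alignment between an output step and the input frames.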

Bio: Björn Schuller (IEEE M 2005 – IEEE SM 2015 – IEEE Fellow 2018) received the Diploma in 1999, the Doctoral degree for his study on Automatic Speech and Emotion Recognition in 2006, and the Habilitation and Adjunct Teaching Professorship in the subject area of Signal Processing and Machine Intelligence in 2012, all from the Technische Universität München, Munich, Germany, all in electrical engineering and information technology. He is currently a Reader in machine learning with the Department of Computing, Imperial College London, London, U.K., Full Professor and Head of the Chair of Embedded Intelligence for Health Care and Wellbeing, Augsburg University, Augsburg, Germany, and Centre Digitisation.Bavaria, Garching, Germany, and an Associate of the Swiss Center for Affective Sciences with the University of Geneva, Geneva, Switzerland. He has (co-)authored 5 books and more than 600 publications in peer-reviewed books, journals, and conference proceedings, leading to more than 18,000 citations (h-index = 65). Prof. Schuller is President Emeritus of the Association for the Advancement of Affective Computing, an elected member of the IEEE Speech and Language Processing Technical Committee, and a member of the ACM and ISCA.

Robust and privacy-preserving multimodal learning with body-camera signals

Prof. Andrea Cavallaro, Queen Mary University of London and The Alan Turing Institute, UK

Abstract: High-quality miniature cameras and associated sensors, such as microphones and inertial measurement units, are increasingly worn by people and embedded in robots. The pervasiveness of these ego-centric sensors is offering countless opportunities in developing new applications and in improving services through the recognition of intentions, actions, activities and interactions. However, despite this richness in sensing modalities, inferences from ego-centric data are challenging due to unconventional and rapidly changing capturing conditions. Furthermore, personal data generated by and through these sensors facilitate non-consensual, non-essential inferences when data are shared with social media services and health apps. In this talk I will first present the main challenges in learning, classifying and processing body-camera signals and then show how exploiting multiple modalities helps address these challenges. In particular, I will discuss action recognition, audio-visual person re-identification and scene recognition as specific application examples using ego-centric data. Finally, I will show how to design on-device machine learning models and feature learning frameworks that enable privacy-preserving services.

Bio: Andrea Cavallaro is Professor of Multimedia Signal Processing and the founding Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He is a Fellow of the International Association for Pattern Recognition (IAPR) and a Turing Fellow at The Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004/2005 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is Editor-in-Chief of Signal Processing: Image Communication; Chair of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee; an IEEE Signal Processing Society Distinguished Lecturer; and an elected member of the IEEE Video Signal Processing and Communication Technical Committee. He is Senior Area Editor for the IEEE Transactions on Image Processing and Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology. He is a past Area Editor for the IEEE Signal Processing Magazine (2012-2014) and past Associate Editor for the IEEE Transactions on Image Processing (2011-2015), IEEE Transactions on Signal Processing (2009-2011), IEEE Transactions on Multimedia (2009-2010), IEEE Signal Processing Magazine (2008-2011) and IEEE Multimedia. He is a past elected member of the IEEE Multimedia Signal Processing Technical Committee and past chair of the Awards Committee of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee. Prof.
Cavallaro has published over 270 journal and conference papers, one monograph, Video Tracking (Wiley, 2011), and three edited books: Multi-Camera Networks (Elsevier, 2009); Analysis, Retrieval and Delivery of Multimedia Content (Springer, 2012); and Intelligent Multimedia Surveillance (Springer, 2013).

Variational Laws of Perception

Prof. Marco Gori, University of Siena, Italy

Abstract: By and large, most studies of machine learning and pattern recognition are rooted in the framework of statistics. This is primarily due to the way machine learning is traditionally posed, namely as a problem of extracting regularities from a sample of a probability distribution. This course promotes a truly different way of interpreting the learning of perceptual tasks, one that relies on system dynamics. We promote a view of learning as the outcome of laws of nature that govern the interactions of intelligent agents with their environment. We reinforce the underlying principle that the acquisition of cognitive skills by learning obeys information-based laws on these interactions, which hold regardless of biology. These laws are derived in a variational framework that is closely related to the principle of least action in physics. After a general introduction, the emphasis is on visual perception and the emergence of visual features.

Bio: Marco Gori received the Ph.D. degree from the University of Bologna, working partly at McGill University, Montréal. He is a full professor of computer science at the University of Siena, head of the Siena Artificial Intelligence Lab, and co-founder of Questit. He was Chairman of the Italian Chapter of the IEEE Computational Intelligence Society and President of the Italian Association for Artificial Intelligence. He is a Fellow of the IEEE, EurAI, and IAPR.

Current and Future Applications of Brain-Computer Interfaces

Dr. Christoph Guger, Founder and CEO of g.tec medical engineering GmbH, Austria

Abstract: Brain-computer interfaces are realized with both non-invasive and invasive technologies and for many different applications. The talk will present the principles that can be used and will highlight important applications such as stroke rehabilitation, the assessment of brain function in patients with disorders of consciousness, and functional mapping of the eloquent cortex with high-gamma activity in neurosurgical applications. Furthermore, closed-loop experiments with brain and body stimulation will be explained, and results of international research projects on controlling avatars by thought will be shown. New ideas coming from the international br41n.io BCI Hackathon series will also be explained.

Workshop: A “Running real-time brain-computer interface experiments” workshop will be organized to demonstrate the correct setup of a brain-computer interface. This includes the correct mounting of the EEG electrodes and data quality control. The brain-computer interface will then be calibrated with the user's specific EEG responses. This calibration can be used to control applications such as a speller, a painting application, or a Sphero robot. The programming environment allows new BCI applications to be configured in .NET, C#, Python, or MATLAB/Simulink.

Bio: Christoph Guger is the founder and CEO of g.tec medical engineering GmbH. He studied Biomedical Engineering at the Technical University of Graz, Austria, and the Johns Hopkins University in Baltimore, USA. During his studies, he concentrated on BCI systems and developed many of the early foundations for real-time bio-signal acquisition and processing. g.tec produces and develops BCIs that help disabled people communicate or control their environments by their thoughts and regain motor function after stroke. The technology is also used to optimize neurosurgical procedures with high-gamma mapping techniques. He runs several international BCI research projects.

Brain Computer Interface Systems – Overview, Design Challenges and Recent Research Developments

Prof. Sadasivan Puthusserypady, Department of Health Technology, Technical University of Denmark, Denmark

Abstract: Brain-Computer Interface (BCI) systems directly use brain signals, especially the electroencephalogram (EEG), to allow users to operate an external device (computer/machine) without using muscles or peripheral nerves. They translate the EEG signals into commands that drive an external machine or game interface. BCI is an assistive technology that has been used by people with a wide variety of physical and mental disorders, improving their quality of life by providing them with new opportunities. This is achieved by incorporating real-time signal processing methods for feature extraction and classification in EEG-based BCIs. This tutorial will provide an overview of EEG-based BCI systems, the EEG signals that can be detected as markers of mental activity, and the signal processing challenges in feature extraction and classification algorithms. The tutorial will also highlight some of our group's recent research achievements in devising cost-effective, high-quality and user-friendly BCI systems for disabled as well as elderly people. These include BCI spellers (communication), BCI-assisted wheelchair control (locomotion), BCI-based schemes for enhancing attention in children with ADHD (neurorehabilitation), and a BCI-controlled functional electrical stimulation (FES) system for the neurorehabilitation of post-stroke patients, to list a few. The tutorial will conclude by highlighting BCI's potential, especially in medical applications.
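The feature-extraction step mentioned above can be illustrated with one widely used family of EEG features: spectral band power. The sketch below is purely illustrative (the sampling rate, band edges, and all names are assumptions, not the group's actual pipeline); it computes the average FFT power of a short EEG segment within a given frequency band:

```python
import numpy as np

def band_power(segment, fs, band):
    """Average power of a 1-D EEG segment in a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250  # assumed sampling rate in Hz
t = np.arange(fs) / fs
# Synthetic 1-second segment: a 10 Hz (alpha-band) rhythm plus noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).standard_normal(fs)
alpha = band_power(eeg, fs, (8, 13))
beta = band_power(eeg, fs, (13, 30))
print(alpha > beta)  # the alpha band dominates this synthetic segment
```

In a real BCI, such band-power features (e.g. the attenuation of the alpha/mu rhythm during motor imagery) would feed a classifier that maps each segment to a control command.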

Bio: Sadasivan Puthusserypady received his B.Tech (Electrical Engineering) and M.Tech (Instrumentation and Control Systems Engineering) from the University of Calicut, India. He obtained his Ph.D. from the Indian Institute of Science, Bangalore, India. He was a Research Associate at the Department of Psychopharmacology, NIMHANS, Bangalore, India, during 1993-1996. From 1996 to 1998, he was a Postdoctoral Research Fellow at the Communications Research Laboratory, McMaster University, Canada. In 1998, he joined Raytheon Systems Canada Ltd., Waterloo, Canada, as a Senior Systems Engineer. In 2000, he moved to the National University of Singapore and worked as an Assistant Professor in the Department of Electrical and Computer Engineering until 2009. Currently, he is an Associate Professor at the Department of Health Technology, Technical University of Denmark, where he leads the BCI group. He was a visiting faculty member at the School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, UK, May-August 2008 and October 2009. His research interests are in Biomedical Signal Processing, Brain-Computer Interfaces (BCIs) and Home Health Care systems. He has published over 150 research publications in peer-reviewed high-impact international journals and conferences and has supervised several master's and PhD students. He is a Senior Member of the IEEE and an Associate Editor for the Journal of Medical Imaging and Health Informatics and the Journal of Nonlinear Biomedical Physics. He has served as track chair, program chair and session chair for many international and national conferences, and has held positions as International Advisory Panel member, organizing committee member, and TPC member for many international and national conferences.

Electrovibration: Displaying Tactile Effects Through Touch Screens

Prof. Cagatay Basdogan, Koc University, Turkey

Abstract: When an alternating voltage is applied to the conductive layer of a capacitive touch screen, a uniform and attractive electrostatic force field is generated between the finger pad and the touch screen in the direction normal to its surface. Although the magnitude of this electrostatic force is small relative to the normal force applied by the finger, it results in a perceivable frictional force in the tangential direction when the finger pad slides on the touch screen. This frictional force can be modulated by altering the magnitude, frequency, and phase of the voltage signal applied to the conductive layer of the touch screen. In this talk, I will discuss the technology and contact mechanics underlying electrovibration, which converts a passive touch screen into an active one; our perception of tactile stimuli displayed through an active touch screen; its potential applications; and the challenges ahead of us in making them available through commercial systems.
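The contact mechanics described above are commonly summarised with a parallel-plate capacitor approximation (a standard simplification from the electrovibration literature, not necessarily the exact model used in the talk): the electrostatic attraction grows with the square of the applied voltage and adds to the finger's normal force in the friction term:

```latex
% Illustrative parallel-plate model of electrovibration:
%   F_e : electrostatic attraction,   V : applied voltage
%   A   : contact area,  d : insulator thickness,  \epsilon : permittivity
F_e \approx \frac{\epsilon_0 \epsilon_r \, A \, V^2}{2 d^2},
\qquad
F_t = \mu \left( F_n + F_e \right)
%   F_t : tangential friction felt by the sliding finger
%   F_n : normal force applied by the finger,  \mu : friction coefficient
```

Because the force scales with the square of the voltage, a sinusoidal excitation at frequency f modulates the friction at 2f, which is one reason the perceived frequency can differ from the drive frequency.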

Bio: Prof. Basdogan has been a faculty member in the College of Engineering at Koc University since 2002. Before joining Koc University, he was a senior member of technical staff in the Information and Computer Science Division of the NASA Jet Propulsion Laboratory at the California Institute of Technology (Caltech) from 1999 to 2002. At JPL, he worked on the 3D reconstruction of Martian models from stereo images captured by a rover and their haptic visualization on Earth. He moved to JPL from the Massachusetts Institute of Technology (MIT), where he was a research scientist and principal investigator at the MIT Research Laboratory of Electronics and a member of the MIT Touch Lab from 1996 to 1999. At MIT, he was involved in the development of algorithms that enable a user to touch and feel virtual objects through a haptic device. He received his Ph.D. degree from Southern Methodist University in 1994 and worked on medical simulation and robotics for Musculographics Inc. at Northwestern University Research Park for two years before moving to MIT. Prof. Basdogan is currently the Associate Editor-in-Chief (AEiC) of the IEEE Transactions on Haptics and serves on the editorial boards of the IEEE Transactions on Mechatronics, Presence: Teleoperators and Virtual Worlds (MIT Press), and Computer Animation and Virtual Worlds (Wiley). In addition to serving on the program and organizational committees of several conferences, he also chaired the IEEE World Haptics Conference in 2011.

Towards human-robot symbiosis: control and co-operation with intelligent tools and vehicles

Prof. David Abbink, Delft University of Technology, The Netherlands

Abstract: Near-future robot capabilities offer great potential for the next evolution in our society – provided we can effectively control, cooperate and co-exist with this technology. My talk will focus on the research of my group at the Delft Haptics Lab, where we aim to better understand how humans physically perform dynamic control tasks with robotic tools or vehicles, and design multi-modal interfaces that facilitate control and co-operation. Our research consists of iterative cycles of human modelling, interface design and human-in-the-loop evaluation, and my talk will also cover these three elements. First, I will present theory and computational models of the human as an adaptive and learning hierarchical controller that can easily move across strategical, tactical and operational levels of tasks. I will illustrate the power of leveraging understanding of low-level perception-action couplings, developed through techniques from neuroscience and system identification. Second, I will propose design guidelines for effective control and co-operation, with a particular focus on haptic shared control as a means to mitigate traditional human-automation issues. Third, I will highlight some ‘lessons learned’ in human factors experiments, and our search for methods and metrics that capture relevant human control behaviour. These three elements will be illustrated through practical applications of our work – from telerobotic arms operating in complex remote environments, to highly-automated driving. I will end with my personal perspectives for the future of our field, including topics like symbiotic driving (mutually adaptive driver-vehicle interaction, which essentially closes the loop on an iterative design and evaluation cycle), and the responsible integration of robotic technologies in our society.

Bio: Prof. dr. ir. David A. Abbink (1977) received his M.Sc. degree (2002) and Ph.D. degree (2006) in Mechanical Engineering from Delft University of Technology. He is currently a full Professor at Delft University of Technology, heading the section of Human-Robot Interaction in the Department of Cognitive Robotics. His research interests include system identification, human motor control, shared control, haptic assistance, and human factors. His PhD thesis on haptic support for car-following was awarded the best Dutch Ph.D. dissertation in movement sciences (2006) and contributed to the market release of Nissan's Distance Control Assist system. David received two prestigious personal grants – VENI (2010) and VIDI (2015) – on haptic shared control for telerobotics and vehicle control. He was co-PI on the H-Haptics programme, in which 16 PhD students and 3 postdocs collaborated on designing human-centered haptic shared control for a wide variety of applications. His work has received funding from Nissan, Renault, Boeing, and the Dutch Science Foundation. He and his team have received multiple awards for scientific contributions. David was voted best teacher of his department for seven consecutive years, and best teacher of his faculty twice. David is an IEEE Senior Member, served as associate editor for the IEEE Transactions on Human-Machine Systems and the IEEE Transactions on Haptics, and co-founded the IEEE SMC Technical Committee on Shared Control.

A Tour to Deep Neural Network Architectures

Prof. Sergios Theodoridis, Dept. of Informatics and Telecommunication, National and Kapodistrian University of Athens, Greece

Abstract: In this short course, a brief tour to the Deep Neural Networks “land” will be attempted. We will begin from the “beginning”. Neural networks will be “visited”, starting from their late-19th-century “spring”, with the discovery of the neuron, and then we will “stop” at the major milestones. The artificial neuron, the perceptron and the multilayer feedforward NN will be the very first to “look” at. Backpropagation and some up-to-date related optimization algorithms will be discussed. Nonlinear activation functions will be presented in the context of their effect on the training algorithm's convergence. In the sequel, techniques guarding against overfitting will be outlined, such as the dropout approach. The final path will evolve along the most modern advances in the terrain. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) will be “visited” and discussed. Adversarial examples, generative adversarial networks (GANs) and the basics of capsule modules will also be part of the tour. If time allows, some “bridges” will be established that bring together deep networks and the Bayesian spirit.
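The dropout approach mentioned as a guard against overfitting admits a very short sketch. The NumPy version below implements "inverted" dropout (an illustrative sketch; the drop probability and all names are assumptions): each unit is zeroed with probability p during training, and the survivors are rescaled so that no correction is needed at test time:

```python
import numpy as np

def dropout(x, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with probability p_drop in training,
    rescaling survivors so the expected activation stays unchanged."""
    if not train or p_drop == 0.0:
        return x
    keep = rng.random(x.shape) >= p_drop  # Boolean mask of surviving units
    return x * keep / (1.0 - p_drop)      # rescale so E[output] == input

rng = np.random.default_rng(0)
x = np.ones((1000, 64))                   # toy batch of activations
y = dropout(x, p_drop=0.5, rng=rng)
print(y.mean())  # close to 1.0: the rescaling preserves the expectation
```

At inference time (`train=False`) the layer is simply the identity, which is exactly why the inverted variant is preferred over scaling at test time.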

Bio: Prof. Sergios Theodoridis is currently Professor of Signal Processing and Machine Learning in the Department of Informatics and Telecommunications of the National and Kapodistrian University of Athens, and he is the holder of a part-time Chair at the Chinese University of Hong Kong, Shenzhen. His research interests lie in the areas of Adaptive Algorithms, Distributed and Sparsity-Aware Learning, Machine Learning and Pattern Recognition, Signal Processing and Learning for Bio-Medical Applications, and Audio Processing and Retrieval. He is the author of the book "Machine Learning: A Bayesian and Optimization Perspective" (Academic Press, 2nd edition, 2020), the co-author of the best-selling book "Pattern Recognition" (Academic Press, 4th edition, 2009), the co-author of the book "Introduction to Pattern Recognition: A MATLAB Approach" (Academic Press, 2010), the co-editor of the book "Efficient Algorithms for Signal Processing and System Identification" (Prentice Hall, 1993), and the co-author of three books in Greek, two of them for the Greek Open University. He is the co-author of seven papers that have received Best Paper Awards, including the 2014 IEEE Signal Processing Magazine Best Paper Award and the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award. He is the recipient of the 2017 EURASIP Athanasios Papoulis Award, the 2014 IEEE Signal Processing Society Education Award and the 2014 EURASIP Meritorious Service Award. He has served as a Distinguished Lecturer for the IEEE Signal Processing as well as the Circuits and Systems Societies. He was Otto Mønsted Guest Professor at the Technical University of Denmark in 2012, and holder of the Excellence Chair, Dept. of Signal Processing and Communications, University Carlos III, Madrid, Spain, 2011. He currently serves as Vice President of the IEEE Signal Processing Society.
He has served as President of the European Association for Signal Processing (EURASIP), as a member of the Board of Governors of the IEEE Circuits and Systems (CAS) Society, as a member of the Board of Governors (Member-at-Large) of the IEEE SP Society, and as Chair of the Signal Processing Theory and Methods (SPTM) Technical Committee of the IEEE SPS. He has served as Editor-in-Chief for the IEEE Transactions on Signal Processing. He is Editor-in-Chief for the Signal Processing Book Series, Academic Press, and co-Editor-in-Chief for the E-Reference Signal Processing, Elsevier. He is a Fellow of IET, a Corresponding Fellow of the Royal Society of Edinburgh (RSE), a Fellow of EURASIP and a Life Fellow of the IEEE.