Jeffrey Bilmes
Jeffrey Bilmes is a professor in the Department of Electrical Engineering at the University of Washington. Prof. Bilmes's primary interests lie in statistical modeling (particularly graphical-model approaches) and signal processing for pattern classification, speech recognition, language processing, bioinformatics, machine learning, submodularity in combinatorial optimization and machine learning, active and semi-supervised learning, and audio/music processing. Prof. Bilmes has pioneered (starting in 2003) the development of submodularity within machine learning, and he received best paper awards at ICML 2013, NIPS 2013, and ACM-BCB 2016, all in this area. In 2014, Prof. Bilmes also received a most influential paper in 25 years award from the International Conference on Supercomputing for a paper on high-performance matrix optimization.
Victor-Emmanuel Brunel
Victor-Emmanuel Brunel received his PhD in 2014 under the supervision of A. Tsybakov and A. Goldenshluger, on nonparametric estimation of convex bodies. He visited the Cowles Foundation for Research in Economics at Yale University in 2014-2015 as visiting faculty, and from 2015 to 2018 he was an Instructor in the Department of Mathematics at MIT. He is now an Assistant Professor in Statistics at ENSAE ParisTech. His main research interests are the interplay between statistics and geometry, nonparametric inference, and learning theory.
Sergey Levine
Sergey Levine received BS and MS degrees in Computer Science from Stanford University in 2009 and a Ph.D. in Computer Science from Stanford in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more. His work has been featured in many popular press outlets, including the New York Times, the BBC, MIT Technology Review, and Bloomberg Business.
Aarti Singh
Aarti Singh received her B.E. in Electronics and Communication Engineering from the University of Delhi in 2001, and M.S. and Ph.D. degrees in Electrical Engineering from the University of Wisconsin-Madison in 2003 and 2008, respectively. She was a Postdoctoral Research Associate in the Program in Applied and Computational Mathematics at Princeton University from 2008 to 2009 before joining Carnegie Mellon University, where she is an Associate Professor.
Michal Valko
Michal is a machine learning scientist at DeepMind Paris and in the SequeL team at Inria Lille – Nord Europe, France. He also teaches the master's course Graphs in Machine Learning at l'ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible and that can adapt to changing environments. Along these lines, he works in domains that deal with minimal feedback, such as online learning, bandit algorithms, semi-supervised learning, and anomaly detection. Most recently he has worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Since structured learning requires more time and space resources, his most recent work also includes efficient approximations, such as graph and matrix sketching with learning guarantees. In the past, the common thread of Michal's work has been adaptive graph-based learning and its application to real-world problems such as recommender systems, medical error detection, and face recognition. He received his Ph.D. in 2011 from the University of Pittsburgh, USA, under the supervision of Miloš Hauskrecht, and was subsequently a postdoc with Rémi Munos before taking a permanent position at Inria in 2012; he received a habilitation from l'ENS Paris-Saclay, France, in 2016.
Cheng Zhang
Cheng Zhang is a researcher in the Machine Intelligence and Perception group at Microsoft Research Cambridge, UK. Before joining Microsoft, she was with the Statistical Machine Learning group at Disney Research Pittsburgh, located at Carnegie Mellon University, USA. She received her PhD from KTH Royal Institute of Technology, Sweden. She is interested both in machine learning theory, including variational inference, deep generative models, and causality, and in various machine learning applications with social impact.