We are very pleased to announce the following EDBT/ICDT 2018 keynote speakers.

Large Scale Machine Learning: Where Do Relational Systems Fit In? (by Chris Jermaine)

Abstract: Modern deep neural networks can do amazing things, performing tasks that would have seemed impossible a decade ago. While there have been significant advances in modeling and algorithms, arguably much of the expansion in the capabilities of deep learning systems has been enabled by advances in computer systems. On the hardware side, GPUs have made it possible to train very large and deep networks in reasonable time. On the software side, modern systems for automatic differentiation such as Google's TensorFlow make it easy for a programmer to implement learning algorithms for even the most complex networks.
However, one area in which systems for machine learning are lacking is support for distributed learning---scaling machine learning algorithms to large clusters of machines. In contrast to modern relational systems, which scale to multiple machines quite well out of the box, getting machine learning computations to work in a distributed setting is often very challenging. The dominant abstraction for distributed learning is the so-called "parameter server", which is essentially a key-value store. A programmer wishing to distribute a learning computation on top of a parameter server often ends up manually distributing computation and data to machines and/or processors. This is not always easy, and can result in brittle code that is not easily adapted to changes in hardware and/or data. In this talk, I will argue that in adopting a key-value architecture, designers of machine learning systems are making some of the same errors made by the designers of early data management systems. Instead, they would be better off building on top of a relational-style backend, utilizing decades of work on ensuring the independence of code from the underlying data and hardware.
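To make the key-value abstraction concrete, here is a minimal single-process sketch of a parameter server (all class and method names are hypothetical illustrations, not code from the talk): workers pull current parameter values by key and push gradients back, and the programmer decides by hand which keys each worker touches.

```python
# Toy illustration of the "parameter server" abstraction: a key-value
# store holding model parameters, with pull/push operations that workers
# would normally invoke over the network. All names are hypothetical.

class ParameterServer:
    def __init__(self, init_params):
        self.params = dict(init_params)  # key -> parameter value

    def pull(self, key):
        # A worker fetches the current value of one parameter.
        return self.params[key]

    def push(self, key, gradient, lr=0.1):
        # A worker sends a gradient; the server applies an SGD-style update.
        self.params[key] -= lr * gradient

# The "worker" below manually chooses which key it reads and writes --
# the hand-distribution of computation and data the abstract refers to.
ps = ParameterServer({"w": 1.0})
grad = 2.0 * (ps.pull("w") - 3.0)   # gradient of the loss (w - 3)^2
ps.push("w", grad)                  # w moves from 1.0 toward 3.0
```

In a real deployment the `pull`/`push` calls cross the network and keys are sharded across server machines, which is exactly where the brittleness arises.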

Chris Jermaine is a Professor of Computer Science at Rice University in Houston, Texas, where he builds systems for large scale analytics and data management and also works on applied machine learning. He is the recipient of a US National Science Foundation CAREER award, an Alfred P. Sloan Foundation Fellowship, a SIGMOD best paper award, an ICDE best paper award, and a SIGKDD best paper runner up award. He is the co-general chair of the 2018 ACM SIGMOD Conference in Houston. In his spare time, Chris enjoys hiking, climbing, and whitewater boating. In one particular exploit, Chris and his wife floated a whitewater raft (home-made from scratch using a sewing machine, glue, and plastic) over 100 miles down the Nizina River (and beyond) in Alaska.

Fine-Grained Algorithms and Complexity (by Virginia Vassilevska Williams)

Abstract: A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Unfortunately, for many central problems, the best known running times are essentially those of their classical algorithms from the 1950s and 1960s. For years, the main tool for explaining computational difficulty has been NP-hardness reductions, basing hardness on P ≠ NP. However, if one cares about exact running time (as opposed to merely polynomial vs non-polynomial), NP-hardness is not applicable, especially if the problem is already solvable in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. In this talk I will give an overview of the current progress in this area, and will highlight some exciting new developments and their relationship to database theory.

Virginia Vassilevska Williams is the Steven and Renee Finn Career Development Associate Professor at MIT CSAIL. She obtained her Ph.D. from Carnegie Mellon University in 2008. After research and postdoctoral positions at the IAS in Princeton, UC Berkeley and Stanford, she spent 3.5 years as an assistant professor at Stanford University before joining MIT in early 2017. She is the recipient of an NSF CAREER award and an Alfred P. Sloan Research Fellowship, and will give an invited lecture at the International Congress of Mathematicians in 2018.

Join Algorithms: From External Memory to the BSP (by Ke Yi)

Abstract: Database systems have traditionally been disk-based, which motivated the extensive study of external memory (EM) algorithms. However, as RAMs continue to get larger and cheaper, modern distributed data systems are increasingly adopting a main-memory-based, shared-nothing architecture, exemplified by systems like Spark and Flink. These systems can be abstracted by the BSP model (with variants like the MPC model and the MapReduce model), and there has been a strong revived interest in designing BSP algorithms for handling large amounts of data.
With hard disks starting to fade away from the picture, EM algorithms may now seem less relevant. However, we observe that the recently developed join algorithms under the BSP model have a high degree of resemblance with their counterparts in the EM model. In this talk, I will present some recent results on join algorithms in the EM and BSP models, examine their relationships, and discuss a general theoretical framework for converting EM algorithms to the BSP model.
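One source of the resemblance is that both models proceed by partitioning data, into disk blocks in EM and across machines in BSP. As a purely illustrative sketch (a single-process toy, not material from the talk), a BSP-style hash join routes tuples of both relations to "machines" by hashing the join key in one superstep, then joins locally in the next:

```python
# Toy BSP-style hash join: both relations are partitioned by hashing the
# join key to one of p "machines"; each machine then joins its local
# fragments independently. In the EM model the analogous step partitions
# tuples into disk-resident buckets.

def bsp_hash_join(R, S, p=4):
    # Superstep 1: route each tuple to the machine that owns its key.
    parts_R = [[] for _ in range(p)]
    parts_S = [[] for _ in range(p)]
    for k, a in R:
        parts_R[hash(k) % p].append((k, a))
    for k, b in S:
        parts_S[hash(k) % p].append((k, b))

    # Superstep 2: each machine joins its local fragments; no further
    # communication is needed because matching keys are co-located.
    out = []
    for i in range(p):
        local = {}
        for k, a in parts_R[i]:
            local.setdefault(k, []).append(a)
        for k, b in parts_S[i]:
            for a in local.get(k, []):
                out.append((k, a, b))
    return out

result = bsp_hash_join([(1, "x"), (2, "y")], [(1, "u"), (3, "v")])
```

The interesting theory, for skewed keys and multiway joins, lies in bounding the per-machine load of exactly this kind of routing step.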

Ke Yi is an Associate Professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. He obtained his Bachelor's degree from Tsinghua University (2001) and PhD from Duke University (2006), both in computer science. His research spans theoretical computer science and database systems. He has received a Google Faculty Research Award (2010), a SIGMOD Best Demonstration Award (2015), and the SIGMOD Best Paper Award (2016). He currently serves as an Associate Editor of ACM Transactions on Database Systems and IEEE Transactions on Knowledge and Data Engineering.

Abstract: In many modern data management scenarios, data is subject to frequent changes. In order to avoid costly re-computing query answers from scratch after each small update, one can try to use auxiliary relations that have been computed before. Of course, the auxiliary relations need to be updated dynamically whenever the data changes.
Dynamic complexity theory studies which queries and auxiliary relations can be updated in a highly parallel fashion, that is, by constant-depth circuits or, equivalently, by first-order formulas or the relational algebra. After gently introducing dynamic complexity theory, I will discuss recent results in this area with a focus on the dynamic complexity of the reachability query.
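As an illustrative toy (not code from the talk), the easy half of dynamic reachability maintains the reachability relation itself as the auxiliary relation: when an edge (u, v) is inserted, the new relation is definable by a single first-order update formula, reach'(x, y) ← reach(x, y) ∨ (reach*(x, u) ∧ reach*(v, y)), evaluated here naively over all pairs:

```python
# Toy incremental maintenance of reachability under edge insertions.
# The auxiliary relation "reach" holds pairs (u, v) with a nonempty
# path u -> v. On inserting edge (u, v) it is updated by a fixed rule
# (a first-order update formula) instead of being recomputed:
#   reach'(x, y) <- reach(x, y) OR (reach*(x, u) AND reach*(v, y))
# where reach* also allows the trivial path x = u (resp. y = v).

def insert_edge(reach, nodes, u, v):
    new = set(reach)
    new.add((u, v))
    for x in nodes:
        for y in nodes:
            if ((x, u) in reach or x == u) and ((v, y) in reach or y == v):
                new.add((x, y))
    return new

nodes = [1, 2, 3]
reach = set()
reach = insert_edge(reach, nodes, 1, 2)   # records the path 1 -> 2
reach = insert_edge(reach, nodes, 2, 3)   # the pair (1, 3) is now derived
```

Edge deletions are the genuinely hard direction, and whether reachability stays first-order maintainable under them was a long-open question of exactly the kind the talk addresses.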

Thomas Zeume received his PhD from the computer science department at TU Dortmund University in 2015. His research focuses on the connections of logic, complexity theory, and database theory. Thomas received the E.W. Beth Dissertation Prize for his dissertation in 2016 and a Best Student Paper Award at the Mathematical Foundations of Computer Science conference 2014.
