Keynote Speakers

The following speakers have graciously agreed to give keynote talks at ACL 2020.

Kathleen R. McKeown

Kathleen R. McKeown is the Henry and Gertrude Rothschild Professor of Computer Science at Columbia University and the Founding Director of the Data Science Institute, which she led from 2012 to 2017. She is also an Amazon Scholar. Earlier, she served as Department Chair (1998-2003) and as Vice Dean for Research for the School of Engineering and Applied Science (2010-2012). A leading researcher in natural language processing, McKeown focuses on the use of data for societal problems; her interests include text summarization, question answering, natural language generation, social media analysis, and multilingual applications. She has received numerous honors and awards: she is an elected member of the American Academy of Arts and Sciences, a Fellow of the American Association for Artificial Intelligence, a Founding Fellow of the Association for Computational Linguistics, and a Fellow of the Association for Computing Machinery. Early in her career she received the National Science Foundation Presidential Young Investigator Award and a National Science Foundation Faculty Award for Women. In 2010, she won both the Columbia Great Teacher Award, an honor bestowed by the students, and the Anita Borg Woman of Vision Award for Innovation.

Rewriting the Past: Assessing the Field through the Lens of Language Generation

Abstract: In recent years, we have seen tremendous advances in the field of natural language processing through the use of neural networks. In fact, they have done so well that they have almost succeeded in rewriting the field as we knew it. In this talk, I examine the state of the field and its link to the past, with a focus on the many forms of language generation. I ask where neural networks have been particularly successful, where approaches from the past might still be valuable, and where we need to turn in the future if we are to go beyond our current success. To answer these questions, this talk will feature clips from a series of interviews I carried out with experts in the field.

Josh Tenenbaum

Josh Tenenbaum is Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Center for Brains, Minds and Machines (CBMM). He received his PhD from MIT in 1999 and taught at Stanford from 1999 to 2002. His long-term goal is to reverse-engineer intelligence in the human mind and brain, and to use these insights to engineer more human-like machine intelligence. His current research focuses on the development of common sense in children and machines, the neural basis of common sense, and models of learning as Bayesian program synthesis. His work has been published in Science, Nature, PNAS, and many other leading journals, and recognized with awards at conferences in Cognitive Science, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Distinguished Scientific Award for Early Career Contributions in Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2011), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2016), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019). He is a fellow of the Cognitive Science Society and the Society of Experimental Psychologists, and a member of the American Academy of Arts and Sciences.

Cognitive and Computational Building Blocks for More Human-Like Language in Machines

Abstract: Humans learn language by building on more basic conceptual and computational resources whose precursors we can already see in infancy. These include capacities for causal reasoning, symbolic rule formation, rapid abstraction, and commonsense representations of events in terms of objects, agents, and their interactions. I will talk about steps toward capturing these abilities in engineering terms, using tools from hierarchical Bayesian models, probabilistic programs, program induction, and neuro-symbolic architectures. I will show examples of how these tools have been applied in both cognitive science and AI contexts, and point to ways they might be useful in building more human-like language, learning, and reasoning in machines.