A special event featuring the 2018 Turing Award Winners!
Sunday, February 9
5:20 – 7:20 PM
Yoshua Bengio (University of Montreal and Mila)
Geoffrey E. Hinton (Google, The Vector Institute, and University of Toronto)
Yann LeCun (New York University and Facebook)
This special two-hour event featured individual talks by each speaker, followed by a panel session.
ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. Bengio is Professor at the University of Montreal and Scientific Director at Mila, Quebec's Artificial Intelligence Institute; Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute, and University Professor Emeritus at the University of Toronto; and LeCun is Professor at New York University and VP and Chief AI Scientist at Facebook.
AAAI Presidential Address
Yolanda Gil (USC Information Sciences Institute)
Sunday, February 9, 8:30 – 9:20 AM
Yolanda Gil is Director of Knowledge Technologies at the Information Sciences Institute of the University of Southern California and Research Professor in Computer Science and in Spatial Sciences. She is also Director of the USC Center for Knowledge-Powered Interdisciplinary Data Science. She received her M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University, with a focus on artificial intelligence. Dr. Gil collaborates with scientists in many domains on intelligent workflows, semantic metadata capture, social knowledge collection, computer-mediated collaboration, and automated discovery. Her current focus is on using artificial intelligence for environmental resources, integrating climate, hydrology, agriculture, and socioeconomic models. She is a Fellow of the Association for Computing Machinery (ACM) and Past Chair of its Special Interest Group in Artificial Intelligence. She is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and was elected as its 24th President in 2016.
AAAI-20 Debate
Moderator: Kevin Leyton-Brown (University of British Columbia, Canada)
Debaters will include luminaries from both industry and academia.
Monday, February 10, 6:15 – 7:15 PM
AI History Panel: Advancing AI by Playing Games
Moderator: Amy Greenwald (Brown University)
Panelists: Murray Campbell (IBM), Michael Bowling (University of Alberta), Hiroaki Kitano (Sony), Garry Kasparov, and David Silver (DeepMind and University College London)
Tuesday, February 11, 4:45 – 6:15 PM
AAAI-20 Invited Speakers
Susan Athey (Stanford University, USA)
Tuesday, February 11, 3:50 – 4:40 PM
Aude Billard (EPFL – Ecole Polytechnique Federale de Lausanne, Switzerland)
Monday, February 10, 8:30 – 9:20 AM
Stuart Russell (University of California, Berkeley, USA)
Wednesday, February 12, 8:50 – 9:50 AM
IAAI-20 Speakers
Robert S. Engelmore Memorial Award Lecture
Henry Kautz (University of Rochester)
Monday, February 10, 5:20 – 6:10 PM
IAAI/AAAI Joint Invited Talk
Dawn Song (UC Berkeley)
Tuesday, February 11, 8:30 – 9:20 AM
IAAI-20 Invited Speaker
David Cox (MIT-IBM Watson AI Lab)
Monday, February 10, 11:15 AM – 12:15 PM
EAAI-20 Speakers
EAAI Outstanding Educator Award Lecture
Marie desJardins (Simmons University, USA)
Saturday, February 8, 9:05 – 9:55 AM
Ben Shapiro and Abigail Zimmermann-Niefield (University of Colorado Boulder, USA)
Sunday, February 9, 9:40 – 10:30 AM
Susan Athey
Stanford University
AAAI 2020 Invited Talk
Talk Title: The Economic Value of Data for Targeted Pricing
Abstract: This presentation reviews recent research about consumer choices in shopping, for example, in supermarkets. Historically, a large literature in economics and marketing has studied consumer choices among brands, considering one product category at a time. A series of recent papers makes use of advances in computation and techniques from matrix factorization to study consumer responses to price changes using observational data from consumer transactions. One question that arises is the value of data (for example, the increase in profit from using additional data), and how different types of data compare, e.g., observing more consumers versus retaining a longer purchase history for each consumer.
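To make the "value of data" question concrete, the toy simulation below is a minimal sketch of its simplest version: how average profit from personalized pricing grows as a firm observes more transactions per consumer. It is a hypothetical NumPy example, not the models or data from the talk; the linear demand curves and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_consumers, cost = 500, 1.0
# Each hypothetical consumer i has linear demand q = a_i - b_i * p, unknown to the firm.
a = rng.uniform(4.0, 8.0, n_consumers)
b = rng.uniform(0.5, 1.5, n_consumers)

def avg_profit(n_obs):
    """Estimate each consumer's demand from n_obs noisy transactions at random
    prices, charge the implied optimal personalized price, and report the
    realized average profit."""
    prices = rng.uniform(1.0, 5.0, (n_consumers, n_obs))
    qty = a[:, None] - b[:, None] * prices + rng.normal(0.0, 0.5, prices.shape)
    total = 0.0
    for i in range(n_consumers):
        slope, intercept = np.polyfit(prices[i], qty[i], 1)  # OLS fit of q on p
        b_hat = max(-slope, 0.1)      # clamp to keep the toy example stable
        a_hat = intercept
        p_star = (a_hat + b_hat * cost) / (2 * b_hat)  # argmax of (p - c)(a - b p)
        p_star = max(p_star, cost)    # never price below cost
        total += (p_star - cost) * max(a[i] - b[i] * p_star, 0.0)
    return total / n_consumers

for n_obs in (2, 5, 20, 100):  # "more data per consumer"
    print(f"{n_obs:3d} observations/consumer -> avg profit {avg_profit(n_obs):.3f}")
```

With more observations per consumer, the estimated demand curves improve and the personalized prices move closer to their optima, so profit rises; the increment between two data sizes is one toy measure of the "value of data."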
Susan Athey is the Economics of Technology Professor at Stanford Graduate School of Business. She received her BA from Duke University and her Ph.D. from Stanford. She previously taught at MIT and Harvard. Her research focuses on the economics of digitization, marketplace design, and machine learning. She previously served as consulting chief economist for Microsoft and now serves on the boards of Expedia, Lending Club, Rover, Turo, Ripple, and Innovations for Poverty Action. She is the director of the Golub Capital Social Impact Lab at Stanford GSB and associate director of the Stanford Institute for Human-Centered Artificial Intelligence.
Yoshua Bengio
Mila (Quebec AI Institute)
Talk Title: Deep Learning for AI
Abstract: Artificial Intelligence (AI) research has been transformed in fundamental ways by the development and success of deep learning. Whereas symbolic approaches to AI focused on human-provided formal knowledge presented as logical rules and facts, much of what humans know is not accessible to them consciously and is thus difficult to communicate to computers. Machine learning bypasses this problem by allowing the computer to acquire that knowledge from data, observations, and interactions with an environment. Neural networks and deep learning are machine learning methods inspired by the brain in which information is not represented by symbolic statements; instead, concepts have distributed representations, patterns of activations of features which can overlap across concepts, making it possible to quickly generalize to new concepts. When we make it possible to compose modules which process such distributed representations (either recursively or through layers of processing), it becomes possible to represent very rich functions compactly and obtain even better generalization. More recently, deep learning has gone beyond its traditional realm of pattern recognition over vectors or images and expanded into many self-supervised methods and generative models able to capture complex multi-modal distributions, as well as models with attention which can process graphs and sets, leading to breakthroughs in speech recognition and synthesis, computer vision, and machine translation, for example. The talk closes with a discussion of current limitations and forward-looking research directions toward human-level AI.
Yoshua Bengio is recognized as one of the world's artificial intelligence leaders and a pioneer of deep learning. A professor at the Université de Montréal since 1993, he received the 2018 A.M. Turing Award, often considered the Nobel Prize of computing, with Geoff Hinton and Yann LeCun. Holder of the Canada Research Chair in Statistical Learning Algorithms, he is also the founder and scientific director of Mila, the Quebec AI Institute, the world's largest university-based research group in deep learning. In 2018, he collected the largest number of new citations in the world for a computer scientist and earned the prestigious Killam Prize from the Canada Council for the Arts. Concerned about the social impact of AI, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.
Aude Billard
EPFL – Ecole Polytechnique Federale de Lausanne, Switzerland
Talk Title: Combining Machine Learning and Control for Reactive Robots
Abstract: Robots have gotten out of the secure and predictable environment of factories and have started to face the complexity and unpredictability of our daily environments. To avoid failing at the tasks they are programmed to do, robots now need to adapt on the go. I will present machine learning techniques that allow robots to learn strategies for reacting rapidly and efficiently to changes in the environment, along with applications of these techniques to rapid and robust manipulation of objects.
Aude Billard is a full professor and head of the LASA laboratory at the School of Engineering at the Swiss Federal Institute of Technology in Lausanne (EPFL). She was a faculty member at the University of Southern California prior to joining EPFL in 2003. She holds a B.Sc. and M.Sc. in Physics from EPFL (1995) and a Ph.D. in Artificial Intelligence (1998) from the University of Edinburgh. Her research spans the fields of machine learning and robotics, with a particular emphasis on learning from sparse data and performing fast and robust retrieval. Her work finds application in robotics, human-robot / human-computer interaction, and computational neuroscience. This research has received best paper awards from several venues, including IEEE Transactions on Robotics, RSS, ICRA, and IROS.
David Cox
MIT-IBM Watson AI Lab
David Cox is the IBM Director of the MIT-IBM Watson AI Lab, a first-of-its-kind industry-academic collaboration between IBM and MIT, focused on fundamental research in artificial intelligence. The Lab was founded with a $240m, 10-year commitment from IBM and brings together researchers at IBM with faculty at MIT to tackle hard problems at the vanguard of AI. Prior to joining IBM, David was the John L. Loeb Associate Professor of the Natural Sciences and of Engineering and Applied Sciences at Harvard University, where he held appointments in Computer Science, the Department of Molecular and Cellular Biology, and the Center for Brain Science.
Marie desJardins
Simmons University, USA
EAAI Outstanding Educator Award Lecture
Talk Title: #AIForAll: A 64-Year Perspective on AI, Computing, Inclusion, and Diversity
Abstract: As the AI community prepares to celebrate the 2^6th anniversary of the Dartmouth Summer Research Project on Artificial Intelligence that launched AI as a field, it is an appropriate time to look back over the last 64 years to consider how far we have progressed. This presentation will focus particularly on trends in education, diversity, and inclusion in AI and in computing more generally. The talk will also include recommendations for the field, including an increased emphasis on ethical computing, best practices for inclusive classrooms and work environments, and how to be an effective ally for underrepresented groups.
Marie desJardins is the Dean of the College of Organizational, Computational, and Information Sciences at Simmons University in Boston. She was previously a professor at the University of Maryland, Baltimore County, where she was a UMBC Presidential Teaching Professor, Academic Innovation Fellow, Honors Faculty Fellow, and Associate Dean of UMBC's College of Engineering and Information Technology. She is an AAAI Fellow, an ACM Distinguished Member, and the recipient of the A. Richard Newton Educator ABIE Award, the UC Berkeley Distinguished Alumni Award in Computer Science, and mentoring awards from CRA-E and NCWIT. Dr. desJardins is known for her research in artificial intelligence, her work in expanding access to K-12 computer science education, and her leadership as a mentor, teacher, and champion for diversity in computing. While at UMBC, she advised 12 Ph.D. students, 26 M.S. students, and over 100 undergraduate researchers.
Geoffrey Hinton
Google and The Vector Institute
Talk Title: Stacked Capsule Autoencoders
Abstract: An object can be seen as a geometrically organized set of interrelated parts. A system that makes explicit use of these geometric relationships to recognize objects should be naturally robust to changes in viewpoint because the intrinsic geometric relationships are viewpoint-invariant. We describe an unsupervised version of capsule networks, in which a neural encoder, which looks at all of the parts, is used to infer the presence and poses of object capsules. The encoder is trained by back-propagating through a decoder, which predicts the pose of each already discovered part using a mixture of pose predictions. The parts are discovered directly from an image, in a similar manner, by using a neural encoder, which infers parts and their affine transformations. We learn object- and part-capsules on unlabeled data and then cluster the vectors of presence of object capsules. When told the names of these clusters, we achieve state-of-the-art results for unsupervised classification on MNIST.
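As a rough illustration of the encoder/decoder structure sketched in the abstract (an object-capsule encoder trained by backpropagating through a decoder that explains each observed part with a mixture of pose predictions), here is a heavily simplified toy sketch. It assumes PyTorch, uses 2-D part poses instead of images and full affine transformations, and is not the authors' stacked-capsule-autoencoder implementation; all module names and sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class PartsToObjects(nn.Module):
    """Encoder: looks at all part poses and infers object-capsule presences and poses."""
    def __init__(self, n_parts, n_objects, pose_dim=2):
        super().__init__()
        self.n_objects, self.pose_dim = n_objects, pose_dim
        self.net = nn.Sequential(
            nn.Linear(n_parts * pose_dim, 128), nn.ReLU(),
            nn.Linear(128, n_objects * (pose_dim + 1)))

    def forward(self, part_poses):                         # (B, n_parts, pose_dim)
        h = self.net(part_poses.flatten(1))
        h = h.view(-1, self.n_objects, self.pose_dim + 1)
        presence = torch.sigmoid(h[..., 0])                # (B, n_objects)
        obj_pose = h[..., 1:]                              # (B, n_objects, pose_dim)
        return presence, obj_pose

class ObjectsToParts(nn.Module):
    """Decoder: each object capsule predicts every part's pose from a learned,
    viewpoint-independent object-to-part offset; each observed part is then
    explained by a mixture over the object capsules."""
    def __init__(self, n_parts, n_objects, pose_dim=2):
        super().__init__()
        self.offsets = nn.Parameter(torch.randn(n_objects, n_parts, pose_dim))

    def forward(self, presence, obj_pose, part_poses, sigma=0.1):
        pred = obj_pose.unsqueeze(2) + self.offsets        # (B, n_objects, n_parts, pose_dim)
        err = ((part_poses.unsqueeze(1) - pred) ** 2).sum(-1)
        log_mix = torch.log(presence.unsqueeze(2) + 1e-6) - err / (2 * sigma ** 2)
        log_lik = torch.logsumexp(log_mix, dim=1).sum(-1)  # mixture over objects, summed over parts
        return -log_lik.mean()                             # negative log-likelihood

# One toy training step on random "part poses": gradients of the decoder's
# reconstruction loss flow back through the encoder.
encoder, decoder = PartsToObjects(n_parts=6, n_objects=3), ObjectsToParts(6, 3)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
parts = torch.randn(32, 6, 2)
presence, obj_pose = encoder(parts)
loss = decoder(presence, obj_pose, parts)
opt.zero_grad(); loss.backward(); opt.step()
```

The key design point mirrored here is that the geometric relationship between an object and its parts (the learned offsets) does not depend on the object's pose, which is what gives the approach its robustness to viewpoint changes.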
Geoffrey Hinton received his Ph.D. in Artificial Intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon, he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an Emeritus Distinguished Professor. He is also a Vice President & Engineering Fellow at Google and Chief Scientific Adviser of the Vector Institute. Geoffrey Hinton was one of the researchers who introduced the backpropagation algorithm and was the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification.
Henry Kautz
University of Rochester
Talk Title: The Third AI Summer
Abstract: The first AI summer was based on optimism about the power of general problem solving, and the second on the power of knowledge engineering. Advances in machine learning have brought us into the third AI summer. This time, however, the stakes are incalculably higher than in the past. The danger is not just, as before, that marketplace hype and an overly narrow scientific focus will lead to disillusionment and retrenchment, but rather that AI now works well enough that it can be used – and is already being used – to eliminate human freedom and dignity. A dystopian future is not inevitable; progress in AI might instead usher in an era of unprecedented prosperity, knowledge, and freedom. This talk will explore the scientific, social, and geopolitical forces at play in the third AI summer.
Henry Kautz is currently serving as Director of the Division of Information and Intelligent Systems at the National Science Foundation. He is a professor of computer science and founding director of the Goergen Institute for Data Science at the University of Rochester. He has been a researcher at AT&T Bell Labs in Murray Hill, NJ, and a full professor at the University of Washington, Seattle. In 2010, he was elected President of AAAI, and in 2016 was elected Chair of the AAAS Section on Information, Computing, and Communication. His interdisciplinary research includes practical algorithms for solving worst-case intractable problems in logical and probabilistic reasoning; models for inferring human behavior from sensor data; pervasive healthcare applications of AI; and social media analytics. In 1989 he received the IJCAI Computers & Thought Award, which recognizes outstanding young scientists in artificial intelligence, and 30 years later received the 2018 ACM-AAAI Allen Newell Award for career contributions that have breadth within computer science and that bridge computer science and other disciplines.
Yann LeCun
Facebook AI Research & New York University
Talk Title: Self-Supervised Learning
Abstract: Almost all the recent progress in computer perception, speech recognition, and NLP has been built around supervised deep learning, in which machines are required to predict human-provided annotations. Today, DL systems are at the core of search engines and social network content filtering and retrieval, medical image analysis, driving assistance, and many areas of science. But the best machine learning methods still require considerably more data or interaction with the environment than human and animal learning. How do we get machines to learn massive amounts of background knowledge about how the world works by observation in a task-independent way, like animals and humans? One promising avenue is self-supervised learning (SSL), where the machine predicts parts of its input from other parts of its input. SSL has already brought about great progress in discrete domains, such as language understanding. The question is how to use SSL for high-dimensional continuous domains such as audio, images, and video.
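A minimal, generic sketch of the "predict parts of the input from other parts" idea follows (assuming PyTorch and random stand-in data; this is not any specific system discussed in the talk): hide a fraction of each input vector and train a small network to reconstruct the hidden part from the visible part, with no human-provided labels.

```python
import torch
import torch.nn as nn

dim = 32
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(128, dim)               # stand-in for unlabeled data
    mask = torch.rand_like(x) < 0.25        # hide roughly 25% of each input
    visible = x.masked_fill(mask, 0.0)
    pred = model(visible)
    # The loss is computed only on the masked (hidden) entries, so the
    # supervision signal comes entirely from the data itself.
    loss = ((pred - x)[mask] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The same recipe, scaled up and with masking applied to words, image patches, or video frames, is the basic pattern behind much of the self-supervised learning the abstract refers to.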
Yann LeCun is VP and Chief AI Scientist at Facebook and Silver Professor at NYU, affiliated with the Courant Institute and the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received an EE Diploma from ESIEE (Paris) in 1983 and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996 and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook while remaining on the NYU Faculty part-time. He was visiting professor at Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience. He is best known for his work in deep learning and the invention of the convolutional network method, which is widely used for image, video, and speech recognition. He is a member of the US National Academy of Engineering, a Chevalier de la Légion d’Honneur, a fellow of AAAI, the recipient of the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, the 2016 Lovie Award for Lifetime Achievement, the University of Pennsylvania Pender Award, and honorary doctorates from IPN, Mexico, and EPFL. He is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.”
Stuart Russell
University of California, Berkeley
Talk Title: How Not to Destroy the World with AI
Abstract: I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans.
Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum’s Council on AI and Robotics. He is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the World Technology Award (Policy category), the Mitchell Prize of the American Statistical Association, the Feigenbaum Prize of the Association for the Advancement of Artificial Intelligence, and Outstanding Educator Awards from both ACM and AAAI. From 2012 to 2014, he held the Chaire Blaise Pascal in Paris, and he was awarded the Andrew Carnegie Fellowship for 2019 to 2021. He is an Honorary Fellow of Wadham College, Oxford; Distinguished Fellow of the Stanford Institute for Human-Centered AI; Associate Fellow of the Royal Institute for International Affairs (Chatham House); and Fellow of the Association for the Advancement of Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book “Artificial Intelligence: A Modern Approach” (with Peter Norvig) is the standard text in AI; it has been translated into 14 languages and is used in over 1400 universities in 128 countries. His research covers a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision-making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity. The latter topic is the subject of his new book, “Human Compatible: AI and the Problem of Control” (Viking/Penguin, 2019).
R. Benjamin Shapiro
University of Colorado Boulder
Talk Title: On Contemporaneous Computing Education: ML for K-12
Abstract: Computer science is a field of remarkable breadth, with problems in human-computer interaction alone spanning natural language processing, visual, audible, and tangible interfaces, accessible design, social computing, and art-making. Machine learning is now being applied in every one of these domains. Bruner claimed that “any subject can be taught effectively in some intellectually honest form to any child at any stage of development.” Computing education must take up this call, including offering developmentally-appropriate machine learning education. I will present a vision for how this could unfold, share progress on my team’s efforts to develop machine learning education for youth, and discuss ongoing challenges.
R. Benjamin Shapiro is an Assistant Professor of Computer Science at the University of Colorado Boulder. He is also faculty, by courtesy, in Learning Sciences & Human Development (School of Education) and the Department of Information Science (College of Media, Communication, and Information). His research group, the Laboratory for Playful Computation (LPC), investigates the design of experiences and technologies for young people to learn computer science through collaborative, creative expression and through their own design of interactive technologies to solve problems in their homes and communities.
Dawn Song
UC Berkeley
Talk Title: AI and Security: Lessons, Challenges, and Future Directions
Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interest lies in AI and deep learning, security, and privacy. She is the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, and Best Paper Awards from top conferences in computer security and deep learning. She is an ACM Fellow and an IEEE Fellow. She is ranked the most cited scholar in computer security (AMiner Award). She obtained her Ph.D. degree from UC Berkeley. Prior to joining UC Berkeley as a faculty member, she was on the faculty at Carnegie Mellon University from 2002 to 2007. She is also a serial entrepreneur and has been named to Inc.'s Female Founder 100 list and the Wired25 list of innovators.
Abigail Zimmermann-Niefield
University of Colorado Boulder
Talk Title: On Contemporaneous Computing Education: ML for K-12
Abstract: This is a joint talk with R. Benjamin Shapiro; see the abstract above.
Abigail Zimmermann-Niefield is a Ph.D. Student in Computer Science at the University of Colorado Boulder. She is co-advised by Ben Shapiro and Shaun Kane and is broadly interested in creativity, literacy, and agency in ML. Her research focuses on how people with little programming experience can learn about and apply Machine Learning by creating models of their own body movements. She draws on theories and methods from human-computer interaction and education. She has a B.A. in Mathematics and Computer Science from Williams College.
Oxford-Style Debate: Academic AI Research in an Age of Industry Labs
Monday, February 10, 6:15 – 7:15 PM
Grand Ballroom, 3rd floor
Proposition: Academic AI researchers should focus their attention on research problems that are not of immediate interest to industry.
Moderator: Kevin Leyton-Brown (University of British Columbia, Canada)
Debaters will include luminaries from both industry and academia.
This lighthearted and entertaining debate will examine the broad theme of how academic AI researchers should direct their efforts to have the most impact now that industry is investing huge amounts into in-house research efforts. Teams of two will argue each side as forcefully as they can (regardless of more nuanced positions the participants may hold) but will then conclude by seeking a middle ground and reflecting on strong arguments from the other side. Finally, we’ll hear from the audience.
AI History Panel: Advancing AI by Playing Games
Tuesday, February 11, 4:45 – 6:15 PM
Building AI to play games as well as human masters has been a goal of AI ever since Arthur Samuel's seminal checkers program in 1959. Today's game-playing programs have surpassed human performance in substantially harder games, including backgammon, chess, poker, and Go. Still, significant AI challenges remain in multi-player games, especially those played by embodied agents, as in soccer. This panel will include representatives of efforts to build machines that excel at playing these games, discussing the main ingredients of the technology they developed, the challenges they encountered, and how the agenda of building expert game-playing machines furthers progress on the real-world goals of AI.
Michael Bowling
University of Alberta
Michael Bowling is a professor at the University of Alberta, a Fellow of the Alberta Machine Intelligence Institute, and a senior scientist at DeepMind. Michael led the Computer Poker Research Group, which built some of the best poker-playing artificial intelligence programs in the world, including being the first to beat professional players at both limit and no-limit variants of the game. Michael also was behind the use of Atari 2600 games to evaluate the general competency of reinforcement learning algorithms, which is now a ubiquitous benchmark suite of domains for reinforcement learning.
Murray Campbell
IBM
Murray Campbell is a Distinguished Research Staff Member at the IBM T. J. Watson Research Center, where he is a manager in the IBM Research AI organization. He received his B.Sc. and M.Sc. in computing science from the University of Alberta and his Ph.D. in computer science from Carnegie Mellon University. He was a member of the IBM team that developed Deep Blue, which was the first computer to defeat the human world chess champion in a match. He received numerous awards for Deep Blue, including the Allen Newell Medal for Research Excellence and the Fredkin Prize. He is an ACM Distinguished Scientist and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).
Amy Greenwald
Brown University
Amy Greenwald is a Professor of Computer Science at Brown University in Providence, Rhode Island. Her research focus is on game-theoretic and economic interactions among computational agents, applied to areas like autonomous bidding in wireless spectrum auctions and ad exchanges. Before joining Brown, Greenwald was a postdoc at IBM’s T.J. Watson Research Center, where her “Shopbots and Pricebots” paper was named Best Paper at IBM Research. Her honors include the Presidential Early Career Award for Scientists and Engineers (PECASE), a Fulbright nomination, and a Sloan Fellowship. Finally, Greenwald is active in promoting diversity in Computer Science, leading multiple K-12 initiatives in which Brown undergraduates teach computer science to Providence public school students.
Garry Kasparov
Born in Baku, Azerbaijan, in the Soviet Union in 1963, Garry Kasparov came to fame at the age of 22 as the youngest world chess champion in history in 1985, retaining his top ranking for 20 years. His matches against the IBM supercomputer Deep Blue in 1996-97 were key to bringing artificial intelligence, and chess, into the mainstream. His creation of Advanced Chess in 1998 led to his formulation of the importance of process in human-plus-machine collaboration. In 2012, Kasparov was named chairman of the New York-based Human Rights Foundation, succeeding Václav Havel. In 2016, he was named a Security Ambassador by Avast Software, where he discusses cybersecurity and the digital future, and to the executive board of the Foundation for Responsible Robotics. His latest book is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins.
Hiroaki Kitano
Sony
Dr. Hiroaki Kitano is President and CEO of Sony Computer Science Laboratories, Inc., Corporate Executive of Sony Corporation, head of Sony AI, President of The Systems Biology Institute, and Professor at Okinawa Institute of Science and Technology Graduate University. He is also a Founding President of the RoboCup Federation, President of the International Joint Conferences on Artificial Intelligence (IJCAI) (2009-2011), and a Member of the AI & Robotics Council of the World Economic Forum (2016-2018). He received The Computers and Thought Award from the International Joint Conference on Artificial Intelligence in 1993, the Prix Ars Electronica 2000, the Design Award 2001 from the Japan Inter-Design Forum, and the Nature Award for Creative Mentoring in Science (Mid Career) in 2009, as well as being an invited artist for the Biennale di Venezia in 2000 and the Museum of Modern Art, New York, in 2001.
David Silver
DeepMind and University College London
David Silver is a principal research scientist at DeepMind and a professor at University College London. David’s work focuses on artificially intelligent agents based on reinforcement learning. David co-led the project that combined deep learning and reinforcement learning to play Atari games directly from pixels (Nature 2015). He also led the AlphaGo project, culminating in the first program to defeat a top professional player in the full-size game of Go (Nature 2016), and the AlphaZero project, which learned by itself to defeat the world’s strongest chess, shogi, and Go programs (Nature 2017, Science 2018). Most recently, he co-led the AlphaStar project, which led to the world’s first grandmaster-level StarCraft player (Nature 2019). His work has been recognised by the Marvin Minsky Award, the Mensa Foundation Prize, and the Royal Academy of Engineering Silver Medal.