December 2, 2016
Speaker: Alfred Z. Spector, Two Sigma Investments
Over the last few decades, empiricism has become the third leg of computer science, adding to the field’s traditional bases in mathematical analysis and engineering. This shift has occurred due to the sheer growth in the scale of computation, networking, and usage, as well as progress in machine learning and related technologies. Resulting data-driven approaches have led to extremely powerful prediction and optimization techniques and hold great promise, even in the humanities and social sciences. However, no new technology arrives without complications: In this presentation, I will balance the opportunities provided by big data and associated A.I. approaches with a discussion of the various challenges. I’ll enumerate ten categories, including those that are technical (e.g., resilience and complexity), societal (e.g., difficulties in setting objective functions or understanding causation), and humanist (e.g., issues relating to free will or privacy). I’ll provide many example problems and make suggestions on how to address some of the unanticipated consequences of big data.
November 18, 2016
Speaker: Scott Kuindersma, Harvard University
Despite the existence of incredibly capable robot hardware, the limitations of our best planning and control algorithms have prevented us from unleashing these machines in critical exploration, automation, and disaster response applications. Many key behaviors, including locomotion and manipulation, involve robots making intermittent frictional contact with their environments. This simple fact has significant computational ramifications, often leading to challenging mixed-integer or nonlinear complementarity problems. This talk will summarize the Harvard Agile Robotics Lab's research on designing optimization algorithms that improve our ability to plan and control contact-rich motions with humanoid robots.
November 11, 2016
Random Sampling: From Surveys to Big Data
Speaker: Edith Cohen, Research scientist at Google (Mountain View); Visiting Professor at the School of Computer Science at Tel Aviv University in Israel
Random sampling is a classic tool for surveying properties and statistics of populations: Samples capture the essence of the data, so that properties of the data can be approximated by estimators applied to the sample. Sampling schemes are tailored to the tasks at hand and seek to balance sample size, approximation quality, and computation.
Historically, sampling is as old as human learning. Some landmarks are its use by Laplace (1802) to estimate the population of France and its first use by the US Census (1938) to estimate the unemployment rate. With the emergence of massive data sets, sampling has become an essential tool for scaling up computation (numerical optimization, clustering, submodular maximization) and for leveraging data, such as traffic or activity logs, that is too large to process or store long term.
In this talk, Dr. Cohen will highlight some selected favorite applications and sampling schemes, in particular samples as locality-sensitive hashes, multi-objective samples, and sampling of streamed or distributed data.
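The streamed-data setting mentioned above can be illustrated with reservoir sampling (Algorithm R), a standard technique for maintaining a uniform random sample over a stream too large to store. This is a generic sketch for context, not an algorithm taken from the talk itself:

```python
import random

def reservoir_sample(stream, k, rng=random.Random(0)):
    """Maintain a uniform random sample of size k over a stream (Algorithm R)."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)          # fill the reservoir first
        else:
            j = rng.randint(0, i)     # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = x
    return sample

# Estimate a property of a large stream from a small sample:
stream = range(1_000_000)             # true mean is 499999.5
sample = reservoir_sample(stream, 1000)
estimate = sum(sample) / len(sample)  # close to the true mean
```

Any item of the stream ends up in the reservoir with equal probability, so a simple estimator applied to the sample (here, the mean) approximates the corresponding property of the full data.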
November 4, 2016
Making Data Matter: Visualization As Communication Medium
Speakers: Fernanda Viegas and Martin Wattenberg, Google
Data is ubiquitous in our lives. It describes our neighborhoods, our cities, and weather patterns; it helps track illnesses and contextualize social patterns. In an increasingly data-rich society, there’s a critical need for tools to help people understand and reason about complex information. Our research seeks to make data visualization accessible to everyone: from lay users to data experts. We will present work that exposes kids to complex data, explores the artistic expressiveness of data, uncovers the underworld of cyber crime, and augments our knowledge of scientific fields such as machine learning. This approach to visualization as an inclusive communication medium points the way to a future where every citizen can more fully participate in a data-driven society.
October 21, 2016
Socially Assistive Robotics: Creating Robots That Care
Speaker: Maja Mataric, University of Southern California
Socially assistive robotics (SAR) is a new field of intelligent robotics that focuses on developing machines capable of assisting users through social rather than physical interaction. The robot’s physical embodiment is at the heart of SAR’s effectiveness, as it hinges on the inherently human tendency to engage with lifelike (but not necessarily human-like or otherwise biomimetic) agents. People readily ascribe intention, personality, and emotion to robots. SAR leverages this engagement, which stems from non-contact social interaction involving speech, gesture, movement demonstration and imitation, and encouragement, to develop robots capable of monitoring, motivating, and sustaining user activities and improving human learning, training, performance, and health outcomes. Human-robot interaction (HRI) for SAR is a growing multifaceted research area at the intersection of engineering, health sciences, neuroscience, and the social and cognitive sciences. This talk will describe our research into embodiment, modeling and steering social dynamics, and long-term user adaptation for SAR. The research will be grounded in projects involving analysis of multi-modal activity data, modeling personality and engagement, formalizing social use of space and non-verbal communication, and personalizing the interaction with the user over a period of months, among others. The presented methods and algorithms will be validated on implemented SAR systems evaluated by human subject cohorts from a variety of user populations, including stroke patients, children with autism spectrum disorder, and elderly people with Alzheimer’s and other forms of dementia.
October 7, 2016
Speaker: David G. Stork, Rambus Labs
The central insight underlying the field of computational sensing and imaging is that jointly designing the optics and the signal processing that yield a final digital image, or an estimate of some property of the scene, can relax the traditional constraints on the optical elements needed to make an optical image that "looks good." In our lensless imagers, binary diffraction gratings with special mathematical properties yield blurry, blob-like optical images that nevertheless contain sufficient information for a digital image of the scene to be computed.
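The compute-the-image idea can be illustrated with a toy example: if the blurring response of the optics (the point-spread function) is known, a digital image can be recovered from a blob-like measurement by deconvolution. This sketch uses a generic Wiener filter with a stand-in box-blur PSF; it is not Rambus's actual grating design or reconstruction pipeline:

```python
import numpy as np

def wiener_deconvolve(measurement, psf, noise_power=1e-3):
    """Recover a scene from a measurement blurred by a known point-spread
    function, using Wiener deconvolution in the frequency domain."""
    H = np.fft.fft2(psf, s=measurement.shape)
    G = np.fft.fft2(measurement)
    # Wiener filter: conj(H) / (|H|^2 + noise_power) regularizes near-zero
    # frequencies instead of dividing by them directly.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_hat))

# Toy demo: blur a simple scene with a blob-like PSF, then recover it.
scene = np.zeros((64, 64))
scene[20:30, 20:30] = 1.0                     # a bright square
psf = np.zeros((64, 64))
psf[:5, :5] = 1.0 / 25                        # crude 5x5 box blur (stand-in PSF)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(blurred, psf)   # close to the original scene
```

The measurement itself looks like a smear, yet because the blur is known and invertible up to noise, the digital reconstruction recovers the scene, which is the essence of designing optics for computation rather than for direct image quality.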
September 23, 2016
Speaker: Koji Tsuda, Professor, Department of Computational Biology and Medical Sciences Graduate School of Frontier Sciences, The University of Tokyo
Materials discovery driven by machine learning is a reality. I will report successful case studies in the discovery of low-LTC (lattice thermal conductivity) compounds from databases, grain-boundary optimization, and the automated design of Si-Ge superlattices.
September 9, 2016
Speaker: Alex Wissner-Gross, President and Chief Scientist of Gemedy
What is the critical path to achieving artificial superintelligence? This talk will explore the computational science and engineering issues associated with defining intelligence, the role of large datasets in accelerating AI breakthroughs, and strategies for detecting and managing the emergence of superhuman AI.