IACS Seminars

The IACS seminar series is a forum for thought leaders from academia, industry, and government to share their research on innovative computational and data science topics and methodologies. Past topics include smart city design, data science for social good, data privacy and security, socially assistive robotics, big data software, machine learning for small business lending, AI technology development, and data-driven algorithmics.

April 12, 2019

Neural Modeling and Differential Equations

Speaker: Isaac Lagaris, Professor of Computer Science and Engineering, University of Ioannina

The universal approximation capability of neural networks is exploited to recover solutions of Differential Equations.  The process of solving a Differential Equation is reduced to that of training a Neural Form.  Boundary conditions may be satisfied either by proper construction of the neural form, or alternatively, by treating them as constraints.
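
To make the neural-form idea concrete, here is a minimal sketch (ours, not the speaker's code) for the toy ODE u'(x) = -u(x) with u(0) = 1: the trial solution 1 + x*N(x) satisfies the boundary condition by construction, so training only needs to drive the residual to zero.

```python
import torch

# Trial solution u(x) = 1 + x * N(x) satisfies u(0) = 1 by construction;
# train N so the ODE residual u' + u vanishes at random collocation points.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)   # collocation points in [0, 1]
    u = 1.0 + x * net(x)                        # neural form
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    loss = ((du + u) ** 2).mean()               # residual of u' = -u
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
print(torch.cat([1.0 + x_test * net(x_test), torch.exp(-x_test)], dim=1))
```

The exact solution is exp(-x); after training, the two printed columns should agree to a few decimal places.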

March 29, 2019

Forecasting Airport Transfer Passenger Flow Using Real-Time Data and Machine Learning

Speaker: Yael Grushka-Cockayne, Associate Professor, Harvard Business School

In collaboration with Heathrow airport, we develop a predictive system that generates quantile forecasts of transfer passengers' connection times. Sampling from the distribution of individual passengers' connection times, the system also produces quantile forecasts for the number of passengers arriving at the immigration and security areas. Airports and airlines have been challenged to improve decision-making by producing accurate forecasts in real time. Our work is the first to apply machine learning to real-time quantile forecasting in the airport. We focus on passengers' connecting journeys, which have been studied by only a few researchers. Better forecasts of these journeys can help optimize passenger experience and improve airport resource deployment. The predictive model developed is based on a regression tree combined with copula-based simulations. We generalize the tree method to predict complete distributions, moving beyond point forecasts. To derive insights from the tree, we introduce the concept of a stable tree that can be summarized by its key variables' splits. We identify seven key factors that impact passengers' connection times, dividing passengers into 16 segments. We find that adding correlations among the connection times of passengers arriving on the same flight can improve the forecasts of arrivals at the immigration and security areas. When compared to several benchmarks, our model is shown to be more accurate in both point forecasting and quantile forecasting. Our predictive system can produce accurate forecasts frequently and in real time. With these forecasts, an airport's operating team can make data-driven decisions, identify late connecting passengers, and assist them in making their connections. The airport can also update its resourcing plans based on the prediction of passenger arrivals. Our approach can be generalized to other domains, such as rail or hospital passenger flow.
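
A rough illustration of the simulation step (with invented numbers, not Heathrow data): draw correlated connection times for the passengers on one flight through a Gaussian copula, then read off quantile forecasts for the number arriving at security within an hour.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n_pax = 40                               # passengers on one arriving flight (assumed)
q_levels = np.array([0.1, 0.5, 0.9])     # quantile levels from a tree-style model
q_values = np.array([35.0, 50.0, 80.0])  # assumed connection-time quantiles (minutes)

def simulate_arrivals(rho=0.4, n_sims=5000, window=60.0):
    # Gaussian copula: correlated normals -> uniforms -> inverse marginal CDF.
    cov = np.full((n_pax, n_pax), rho)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(n_pax), cov, size=n_sims)
    u = norm.cdf(z)
    # Piecewise-linear inverse CDF; tails are clamped to the outer quantiles.
    times = np.interp(u, q_levels, q_values)
    return (times <= window).sum(axis=1)     # arrivals within the window

counts = simulate_arrivals()
print(np.quantile(counts, [0.1, 0.5, 0.9]))  # quantile forecast of arrival counts
```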

March 15, 2019

The Dedalus Project: A Flexible Approach to Accurately Solving PDEs, with Applications in Stellar Astrophysics

Speaker: Benjamin Brown, Assistant Professor, University of Colorado

Advances in theoretical astrophysics are powered by computational tools.  Here we discuss the Dedalus project, a flexible, open-source and spectrally accurate framework for solving partial differential equations.  In Dedalus, equations are separated from solution techniques, allowing rapid comparison of different approximations within a consistent numerical framework.  We discuss recent advances in representing spherical geometries and the sparse solution of non-constant coefficient systems, with illustrations drawn from our team's work on stellar astrophysics, convection and magnetic dynamo processes.  Further details are at http://dedalus-project.org/.
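
Dedalus has its own symbolic problem-specification interface, so the snippet below is not its API; it is only a reminder of why spectral methods are attractive: on a periodic domain each Fourier mode of the heat equation evolves independently and can be advanced exactly.

```python
import numpy as np

# Fourier spectral solution of u_t = nu * u_xx on a periodic domain.
N, nu, dt = 128, 0.1, 1e-3
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers

u_hat = np.fft.fft(u)
for _ in range(1000):                     # advance to t = 1
    u_hat *= np.exp(-nu * k**2 * dt)      # exact integrating factor per mode
u_exact = np.exp(-nu) * np.sin(x) + 0.5 * np.exp(-9 * nu) * np.sin(3 * x)
print(np.allclose(np.fft.ifft(u_hat).real, u_exact, atol=1e-8))  # True
```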

February 15, 2019

Predictive Modeling of Aperiodic Astrophysical Behavior

Speaker: Matthew Graham, Research Professor of Astrophysics, Caltech

The majority of variable astronomical sources are aperiodic but represent a wide range of physical processes and scales. They can play a key role in our understanding of complex dynamic physical environments from stellar photospheres to accretion disks to merging galactic systems. However, they remain poorly studied in comparison to periodic sources, partly due to a lack of suitable statistical tools and methodologies. The best known aperiodic classes are quasars and young stellar objects (YSOs) but in both cases fundamental questions remain about the physical mechanisms behind their optical variability. A new generation of sky surveys is enabling systematic studies of astrophysical variability and discovering as many new phenomena as it seeks to explain: in quasars, we have discovered sub-parsec separated binaries, major multi-year long flares attributable to microlensing and explosive stellar-related activity in the accretion disk, and changing-state sources indicative of thermal fronts propagating through the accretion disk. In this talk, Professor Graham will discuss new approaches to characterize aperiodic variability using generative data-derived models and predict the future behavior of aperiodic sources. This allows them to be monitored in real-time with new synoptic facilities thus providing a more powerful way to detect unexpected behavior than differential photometry.
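
One widely used generative, data-derived model for quasar optical variability is the damped random walk (an Ornstein-Uhlenbeck process). A toy sketch with invented parameters, showing simulation on irregular epochs and the one-step predictive distribution that real-time monitoring would compare against new data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Damped random walk: damping time tau (days), amplitude sigma, mean magnitude mu.
tau, sigma, mu = 200.0, 0.2, 19.0
t = np.sort(rng.uniform(0, 2000, 300))   # irregular observation times
x = np.empty_like(t)
x[0] = mu
for i in range(1, len(t)):
    a = np.exp(-(t[i] - t[i - 1]) / tau)
    # Exact OU transition over the gap: conditional mean and variance.
    x[i] = mu + a * (x[i - 1] - mu) + rng.normal(0, sigma * np.sqrt(1 - a**2))

# Predictive mean and standard deviation 50 days after the last epoch;
# an observation far outside this range would flag unexpected behavior.
a = np.exp(-50.0 / tau)
print(mu + a * (x[-1] - mu), sigma * np.sqrt(1 - a**2))
```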

November 30, 2018

Machine Learning for Materials Discovery

Speaker: Julia Ling, Director of Data Science, Citrine Informatics

Materials science presents a unique set of challenges and opportunities for machine learning methods in terms of data size, data sparsity, available domain knowledge, and multi-scale physics.  In this talk, Dr. Ling will discuss how machine learning can be used to accelerate materials discovery through a sequential learning workflow.  She will examine how domain knowledge can be integrated into data-driven models, the role of uncertainty quantification in driving exploration of new design candidates, and how to forecast the impact of a data-driven approach on a given materials discovery campaign.
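
A sequential learning workflow of this kind can be sketched as Bayesian optimization (a generic sketch, not Citrine's platform): fit a surrogate with uncertainty, score untested candidates by expected improvement, measure the most promising one, and repeat. The objective and settings below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def property_of(x):               # stand-in for a slow experiment or simulation
    return -(x - 0.3) ** 2 + 0.05 * rng.normal()

X_pool = np.linspace(0, 1, 200).reshape(-1, 1)   # candidate "materials"
X = list(X_pool[::50])                           # small initial design
y = [property_of(x[0]) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), y)
    mu, sd = gp.predict(X_pool, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = X_pool[np.argmax(ei)]                     # uncertainty drives exploration
    X.append(x_next)
    y.append(property_of(x_next[0]))

print(X[int(np.argmax(y))], max(y))   # best candidate found
```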

November 9, 2018

Bottlenecks, Representations, and Fairness: Information-Theoretic Tools For Machine Learning

Speaker: Flavio P. Calmon, Assistant Professor of Electrical Engineering, Harvard University

Information theory can shed light on the algorithm-independent limits of learning from data and serve as a design driver for new machine learning algorithms. In this talk, Dr. Calmon will discuss a set of flexible information-theoretic tools that can be used to (i) understand fairness and discrimination by machine learning models and (ii) characterize data representations learned by complex learning models. He will illustrate these techniques in both synthetic and real-world datasets, and discuss future research directions.

November 2, 2018

Machine Learning in the Healthcare Enterprise

Speaker: Mark H. Michalski, Executive Director of the MGH & BWH Center for Clinical Data Science

Machine learning is an emerging technology with promise to impact a wide variety of areas throughout the healthcare enterprise. In this discussion, Dr. Michalski will review advances in machine learning and their potential impact on several areas of healthcare, with a special focus on diagnostic areas. In addition, he’ll discuss some of the challenges and approaches that have been taken to translate this technology at the Partners organization.

October 26, 2018

Data Science for Game Development

Speaker: Dean Wyatte, Lead Data Scientist, Activision

Online games are capable of generating vast amounts of data ranging from aggregate player behavior to low-level instrumentation from the game engine and back end services. Modern games are also designed from large amounts of data -- think textures, models, and photogrammetry; animation, physical systems, and motion capture. This talk will describe the role of data science in supporting these multiple stages of game development. Come learn about some of the specific challenges of making games at Activision and the data-driven solutions that the Activision team has built.

October 19, 2018

Computational Perception with Applications to Graphic Design

Speaker: Zoya Bylinskii, Research Scientist at Adobe Research & MIT Postdoctoral Affiliate

What makes an image memorable? Which parts of a display or interface capture attention? How can a visualization be designed to be impactful and educational? At the core of computational perception, Zoya's work focuses on understanding human memory and attention, using computational approaches (e.g., information-theoretic models and deep learning) for modeling, and, coming full circle, using the findings about human perception to improve user interfaces. During her talk, Zoya will demonstrate applications of this work to interactive design tools and automatic graphic design summarization, and talk about the future of A.I. for creativity.

September 28, 2018

Fluid Mechanics with Turbulence, Reduced Models, and Machine Learning

Speaker: David Sondak, Lecturer in Computation, Institute for Applied Computational Science, Harvard University

Fluids are everywhere. As humans, we are constantly surrounded by them, including the air we breathe, the blood in our bodies, the water in the oceans, and the solar wind bombarding the Earth. Indeed, fluids impact every area of science from the biological to the geophysical and astrophysical. Understanding and controlling fluid behavior has an immense impact on human society, from more effective drug delivery techniques through more efficient energy harvesting technologies. However, the desire to understand and control fluid behavior gives rise to significant mathematical challenges in the form of multiscale behavior, complex geometries, and complex fluids.

Dr. Sondak will begin his talk with an introduction to fluid mechanics and why it is an important field of study. He will then motivate numerical methods and multiscale phenomena before giving an overview of turbulence. He will discuss reduced order models from the perspective of turbulence, including turbulence modeling. Dr. Sondak will utilize the last portion of his talk to present recent progress on using machine learning to develop and improve turbulence models.

September 14, 2018

Learning to Rank an Assortment of Products

Speaker: Kris Ferreira, Assistant Professor of Business Administration in the Technology and Operations Management (TOM) Unit, Harvard Business School

This talk will highlight the joint work of Kris Johnson Ferreira, Assistant Professor of Business Administration in the TOM Unit at HBS, and Shreyas Sekar, postdoc at the Laboratory for Innovation Science at Harvard.

Kris Johnson Ferreira and Shreyas Sekar's research considers the product ranking challenge that online retailers face when their customers typically do not have a good idea of the product assortment offered. Customers form an impression of the assortment after looking only at products ranked in the initial positions, and then decide whether they want to continue browsing all products or leave the site. Ferreira and Sekar propose to resolve this challenge with a class of online algorithms that prescribe a ranking to show each customer with the goal of maximizing customer engagement. Over time, the algorithm learns about customer interest/engagement via clicks and uses this information to inform rankings offered to subsequent customers. Kris will prove that their algorithm converges to the best known ranking for the full-information setting, and share simulation results that highlight its performance on data from a large online retailer.
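
Ferreira and Sekar's algorithm and guarantees are specific to their paper; purely as a flavor of learning rankings from clicks, here is a simple Thompson-sampling baseline under an assumed cascade click model with made-up click rates:

```python
import numpy as np

rng = np.random.default_rng(0)

n_products = 8
true_p = np.array([0.05, 0.30, 0.12, 0.08, 0.20, 0.02, 0.15, 0.10])  # unknown
alpha, beta = np.ones(n_products), np.ones(n_products)  # Beta prior per product

for _ in range(5000):
    scores = rng.beta(alpha, beta)        # Thompson sample of click rates
    ranking = np.argsort(-scores)         # show highest samples first
    for prod in ranking[:4]:              # customer scans the top positions
        clicked = rng.random() < true_p[prod]
        alpha[prod] += clicked
        beta[prod] += 1 - clicked
        if clicked:                       # cascade model: stop at first click
            break

print(np.argsort(-(alpha / (alpha + beta))))   # learned ranking of products
```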

February 28, 2020

What Do Models of Natural Language "Understanding" Actually Understand?

Speaker: Ellie Pavlick, Brown University

Natural language processing has become indisputably good over the past few years. We can perform retrieval and question answering with purported super-human accuracy, and can generate full documents of text that seem good enough to pass the Turing test. In light of these successes, it is tempting to attribute the empirical performance to a deeper "understanding" of language that the models have acquired. Measuring natural language "understanding", however, is itself an unsolved research problem. In this talk, Dr. Pavlick argues that we have made real, substantive progress on modeling the _form_ of natural language, but have failed almost entirely to capture the underlying _meaning_. She'll discuss recent work which attempts to illuminate what it is that state-of-the-art models of language are capturing by inspecting the models' internal structure directly and by measuring their inferential behavior. Finally, Dr. Pavlick will conclude with results on the ambiguity of humans' linguistic inferences to highlight the challenges involved with developing prescriptivist language tasks for evaluating models of semantics.

February 21, 2020

Hardware and Software Co-design: Preparing and Optimizing Scientific Applications for Exascale Computing

Speaker: Nicholas Malaya, Advanced Micro Devices

The largest computers in the world are an essential tool for key scientific simulations, spanning a wide range of applications across fundamental science, in areas such as cosmology or turbulent flows, and applied engineering, such as material science and storm surge modeling. Historically, Moore's law has driven rapid expansion of computational capability, with the largest computers following an exponential growth in FLOPs. However, Moore's law is slowing, and high performance computing is undergoing a radical transformation from largely homogeneous clusters of CPUs to heterogeneous machines with specialized accelerators, particularly GPUs. This paradigm shift requires that algorithms and software evolve to leverage the specialized hardware in these systems. In this talk, Dr. Malaya will discuss hardware and software 'co-design', and the essential role computational science plays as a bridge between physical subject matter experts and hardware design. This challenge is motivated by Oak Ridge National Laboratory's upcoming exascale supercomputer, Frontier, a heterogeneous system of AMD CPUs and GPUs. Coming online in 2021, Frontier is expected to be the largest supercomputer ever constructed. Ensuring application readiness from Day-0 for a massively parallel, heterogeneous machine is a challenging task. Application preparation is being driven via the Frontier Center for Accelerated Application Readiness (CAAR) program. The Frontier CAAR is a partnership between application core developers, vendor partners, and OLCF staff members to optimize simulation, data-intensive, and machine learning scientific applications for exascale performance, ensuring that Frontier will be able to perform large-scale science when it opens to users in 2022.

February 7, 2020

Uncertainty Quantification in Machine Learning

Speaker: Lalitha Venkataramanan, Schlumberger   

Deep learning techniques have been shown to be extremely effective for various classification and regression problems, but quantifying the uncertainty of their predictions and separating them into the epistemic and aleatoric fractions is still considered challenging. In subsurface characterization projects, tools consisting of seismic, sonic, magnetic resonance, resistivity, dielectric and/or nuclear sensors are sent downhole through boreholes to probe the earth’s rock and fluid properties. The measurements from these tools are used to build reservoir models that are subsequently used for estimation and optimization of hydrocarbon production. Machine learning algorithms are often used to estimate the rock and fluid properties from the measured downhole data. Quantifying uncertainties of these properties is crucial for rock and fluid evaluation and subsequent reservoir optimization and production decisions. These machine learning algorithms are often trained on a ‘ground-truth’ or core database.

During the inference phase, which involves application of these algorithms to field data, it is critical that the machine learning algorithm flag data from new geologies that the model was not trained on as 'out of distribution'. It is also highly important to be sensitive to heteroscedastic aleatoric noise in the feature space arising from the combination of tool and geological conditions. Understanding the sources of uncertainty and reducing them is key to designing intelligent tools and applications, such as automated log-interpretation answer products for exploration and field development. In this presentation, Dr. Lalitha Venkataramanan will discuss a few methods researchers have used in uncertainty quantification.
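
One common recipe for separating the two kinds of uncertainty (a generic sketch, not necessarily the approach used at Schlumberger) is a deep ensemble with heteroscedastic Gaussian outputs: each member predicts a variance that captures aleatoric noise, while disagreement across members captures epistemic uncertainty, which grows on out-of-distribution inputs.

```python
import torch

def make_member():                        # each member outputs (mu, log_var)
    return torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                               torch.nn.Linear(64, 2))

x = torch.linspace(-1, 1, 256).reshape(-1, 1)
y = torch.sin(3 * x) + 0.1 * (x + 1) * torch.randn_like(x)  # heteroscedastic noise

members = [make_member() for _ in range(5)]
for net in members:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        mu, log_var = net(x).chunk(2, dim=1)
        nll = 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()
        opt.zero_grad()
        nll.backward()
        opt.step()

with torch.no_grad():
    mus = torch.stack([net(x).chunk(2, dim=1)[0] for net in members])
    vars_ = torch.stack([net(x).chunk(2, dim=1)[1].exp() for net in members])
    epistemic = mus.var(dim=0)            # spread across members
    aleatoric = vars_.mean(dim=0)         # average predicted noise variance
    print(epistemic.mean().item(), aleatoric.mean().item())
```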

November 15, 2019

The Universe in a Stream: Challenges and Progress of the ALeRCE Broker

Speaker: Francisco Förster Burón, Universidad de Chile

To effectively connect the astronomical follow-up infrastructure with a new generation of large-étendue survey telescopes, such as ZTF or LSST, there is a need for a new type of instrument: the astronomical alert broker. In this talk, Dr. Francisco Förster Burón will review the challenges and progress of building one of these systems: the Automatic Learning for the Rapid Classification of Events (ALeRCE) astronomical alert broker. ALeRCE (http://alerce.science/) is a new alert annotation and classification system led by an interdisciplinary and interinstitutional group of scientists from Chile and the US. ALeRCE is focused on three scientific cases: transients, variable stars, and active galactic nuclei. Additionally, Dr. Förster Burón will discuss some of the challenges associated with the problem of alert classification, including ingestion, annotation, database management, training-set building, distributed processing, machine learning classification, and visualization, as well as the challenges of working in large interdisciplinary teams. He will show some results based on the real-time ingestion and classification using the ZTF alert stream as input, as well as some of the tools available.

November 1, 2019

Investigating Wall-Bounded Turbulence in Direct Numerical Simulations

Speaker: Robert Moser, University of Texas

Wall-bounded turbulence has been of great concern at least since its description as a fluid dynamic phenomenon by Osborne Reynolds in 1883. The reason, of course, is that in such turbulent wall-bounded flows, the turbulence is responsible for transport of momentum and heat from the bulk flow to the wall. It has long been recognized that wall-bounded shear flows at high Reynolds number are characterized by a thin viscous dominated inner layer at the wall and a thick inertially dominated outer layer away from the wall, with a matching region in between known as the log layer. However, the dynamics of the interaction between the inner and outer layers is not well understood. In this talk, Dr. Moser will address this shortcoming using data from direct numerical simulations (DNS) of turbulent channel flow.
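
For reference, the log layer is the region where the mean velocity follows the classical logarithmic law, written in wall units as:

```latex
% Log law of the wall: u^+ = \bar{u}/u_\tau, y^+ = y u_\tau / \nu,
% with Karman constant \kappa \approx 0.41 and additive constant B \approx 5.
u^+ = \frac{1}{\kappa}\,\ln y^+ + B
```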

Several years ago, Dr. Moser’s research team completed a DNS of a channel flow at high enough Reynolds number for it to exhibit a significant separation of scales between the inner and outer layer turbulence. This high Reynolds number simulation allows, for the first time, the investigation of the interaction between inner and outer layers using DNS. The data from this simulation has been used to evaluate the spectrum (in horizontal directions) of the terms in the evolution equation for the Reynolds stress. This data has revealed several remarkable features of wall-bounded turbulence in the log layer and of the interaction between inner and outer layer turbulence. In particular, it is found that turbulence production in the log and outer layers dominantly occurs in modes that are extremely elongated in the streamwise direction, with spanwise length scales greater than 1000 wall units. The generation of energy in these large-scale elongated modes results in an intricate set of energy exchanges, including transport from the outer layer into the inner layer in these same large-scale elongated modes, resulting in modulation of the near-wall autonomous dynamics of the inner layer. This and other features of the wall turbulence dynamics will be discussed, as will their implications for large eddy simulation.

October 25, 2019

Learning from Watching: Applying Data Science to Entertainment

Speaker: Nathan Sanders, WarnerMedia Applied Analytics

The modern entertainment industry offers a dynamic environment for applying data science. It involves the study of human behavior at scale and the quantitative modeling of subjective experience and preference, all within a rapidly evolving domain encountering disruption from a myriad of new technologies and business models. Film, TV, and game production, distribution, branding, and marketing are all in the midst of transformative change prompted by these market forces as well as new applications of statistical techniques from fields ranging from natural language processing to computer vision to representation learning. In this talk, Nathan Sanders will provide a perspective on the unique role of data science in the entertainment industry based on his experience at WarnerMedia Applied Analytics. He will discuss how they have sought to maximize the impact of data science by designing models that enable both human and machine learning from data, and through effective communication with stakeholders throughout the business, while examining problems such as how to influence the impact of a blockbuster film’s marketing campaign and how to infer the thematic composition of content like TV shows.

October 18, 2019

On the Detection of Malware on Virtual Assistants Based on Behavioral Anomalies

Speaker: Spiros Mancoridis, Drexel University

Dr. Spiros Mancoridis's work explores security concerns pertaining to running software similar to Amazon Alexa home assistant on IoT-like platforms. He and his colleagues implemented a behavioral-based malware detector and compared the effectiveness of different system attributes that are used in detecting malware, i.e., system calls, network traffic, and the integration of system call and network traffic features. Given the small number of malware samples for IoT devices, they created a parameterizable malware sample that mimics Alexa behavior in varying degrees, while exfiltrating data from the device to a remote host. The performance of the anomaly detector was evaluated based on how well it determined the presence of the parameterized malware on an Alexa-enabled IoT device.
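
The skeleton of such a detector fits in a few lines: featurize windows of behavior (here, invented system-call count vectors), train on clean traces only, and flag windows that deviate.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features: counts of four system-call groups per time window.
normal = rng.poisson(lam=[20, 5, 1, 8], size=(500, 4))      # benign Alexa-like load
malicious = rng.poisson(lam=[20, 5, 9, 30], size=(20, 4))   # extra file/network I/O

detector = IsolationForest(random_state=0).fit(normal)      # trained on benign only
print((detector.predict(malicious) == -1).mean())           # fraction flagged
```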

October 4, 2019

Multimodal Mapping of Brain Activity

Speaker: Jia Liu, Harvard SEAS

Real-time recording of brain-wide cellular activities with single-cell, single-spike spatiotemporal resolution and cell-type specificity is important to understanding brain functions and malfunctions, and requires an in situ multimodal mapping capability at single-cell resolution. In this talk, Professor Liu will first introduce his group's efforts to design bioelectronic sensors for multichannel, brain-wide, chronically stable, single-unit electrophysiology. Second, he will introduce their efforts to enable gene expression and cellular connectivity mapping with simultaneous sensor position registration. Last, he will discuss his lab's current experimental and computational research efforts, which aim to combine these multimodal tools into one platform for brain activity mapping.

September 20, 2019

Bayesian Machine Learning Models for Understanding Microbiome Dynamics

Speaker: Georg Gerber, Harvard Medical School

The human microbiome is highly dynamic on multiple timescales, changing dramatically during development of the gut in childhood, with diet, or due to medical interventions. In this talk, Dr. Gerber will present several Bayesian machine learning methods developed for gaining insight into microbiome dynamics. The first, MC-TIMME (Microbial Counts Trajectories Infinite Mixture Model Engine), is a non-parametric Bayesian model for clustering microbiome time-series data that we have applied to gain insights into the temporal responses of human and animal microbiota to antibiotic, infectious, and dietary perturbations. The second, MDSINE (Microbial Dynamical Systems INference Engine), is a method for efficiently inferring dynamical systems models from microbiome time-series data and predicting temporal behaviors of the microbiota, which has been applied to developing bacteriotherapies for C. difficile infection and inflammatory bowel disease. The third, Microbiome Interpretable Temporal Rule Engine (MITRE), is a method for predicting host status from microbiome time-series data, which achieves high accuracy while maintaining interpretability by learning predictive rules over automatically inferred time-periods and phylogenetically related microbes.
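
MDSINE's inference is Bayesian, but its dynamical-systems backbone is generalized Lotka-Volterra. A point-estimate sketch on a synthetic three-taxon community shows how growth rates and interactions can be read off time-series data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generalized Lotka-Volterra: dx_i/dt = x_i * (mu_i + sum_j A_ij x_j).
n, T, dt = 3, 400, 0.05
mu = np.array([0.8, 0.5, 0.6])                  # invented growth rates
A = np.array([[-1.0, -0.2, 0.1],                # invented interaction matrix
              [0.05, -0.8, -0.3],
              [-0.1, 0.2, -0.9]])

x = np.empty((T, n))
x[0] = rng.uniform(0.1, 0.5, n)
for t in range(T - 1):                          # forward-Euler simulation
    x[t + 1] = x[t] + dt * x[t] * (mu + A @ x[t])

# Infer (mu, A) from d(log x)/dt ~ mu + A x via linear least squares.
dlogx = np.diff(np.log(x), axis=0) / dt
design = np.hstack([np.ones((T - 1, 1)), x[:-1]])
coef, *_ = np.linalg.lstsq(design, dlogx, rcond=None)
print(coef[0])      # approximately recovered growth rates
print(coef[1:].T)   # approximately recovered interaction matrix
```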

September 13, 2019

AI for Social Good: Learning and Planning in the End-to-End, Data-to-Deployment Pipeline

Speaker: Milind Tambe, Harvard SEAS

With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. Professor Tambe's talk will focus on the problems of public safety and security, wildlife conservation and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to strategically deploy our limited intervention resources in these problem domains. He will discuss the importance of conducting this research via building the full data to field deployment end-to-end pipeline rather than just building machine learning or planning components in isolation. Results from deployments from around the world show concrete improvements over the state of the art. In pushing this research agenda, Dr. Tambe believes AI can indeed play an important role in fighting social injustice and improving society.

April 30, 2021

Hard NLP Tasks: Determining who is who and what is what

Speaker: Chris Tanner, Harvard University

While natural language processing (NLP) has experienced enormous progress in recent years, some tasks remain incredibly challenging. Namely, Coreference Resolution is a fundamental, unsolved task that attempts to resolve which words in a body of text refer to the same underlying "thing" (e.g., entity or event). This serves as an essential component of many other core NLP tasks, including information extraction, question-answering, document summarization, etc. However, decades of research have primarily focused on resolving entities (e.g., people, locations, organizations), with significantly less attention given to events -- the actions performed. In our work, we developed a state-of-the-art model for event coreference that uses almost no features. Last, we touch on remaining challenges and future directions.

April 23, 2021

The Enduring Impact of NYC's Stop, Question & Frisk Program: Lessons from "Big Data"

Speaker: Joscha Legewie, Harvard University

A growing effort among social scientists assesses the social consequences and costs of law enforcement activity for the health and education of minorities. Using large-scale administrative data, this talk presents four key research findings from my ongoing work on this topic: First, disparities in exposure to Stop-Question-and-Frisk (SQF) are vast. Second, racial bias in policing explains some of these disparities in exposure. Third, SQF (maybe) reduced crime. Finally, SQF has negative consequences for the education and health of minority youth. I conclude by outlining key challenges for future research on the social consequences and costs of law enforcement activity.

April 16, 2021

Automatic Curricula in Deep Multi-Agent Reinforcement Learning

Speaker: Thore Graepel, Google DeepMind

Multi-agent systems are emerging as a crucial element in our pursuit of designing and building intelligent systems. In order to succeed in the real world, artificial agents must be able to cooperate, communicate, and reason about other agents' beliefs, intentions, and behaviours. Furthermore, as system designers we need to think about composing intelligent systems from intelligent subsystems, a multi-agent approach inspired by the observation that intelligent agents like organisations or governments are composed of other agents. Last but not least, as a product of evolution, intelligence did not emerge in isolation, but as a group phenomenon. Hence, it seems plausible that learning agents require interaction with other agents to develop intelligence.

In this talk, I will discuss the exciting role that deep multi-agent reinforcement learning can play in the design and training of intelligent agents. In particular, training RL agents in interaction with each other can lead to the emergence of an automatic learning curriculum: from the perspective of each learning agent, the evolving behaviours of the other learning agents constitute challenging environment dynamics and pose ever-evolving tasks. I will present three case studies of deep multi-agent RL with auto-curricula: i) learning to play board games at master level with AlphaZero, ii) learning to play the game of Capture-The-Flag in 3D environments, and iii) learning to cooperate in social dilemmas.

April 9, 2021

The future of climate modeling in the age of artificial intelligence

Speaker: Laure Zanna, NYU

Numerical simulations used for weather and climate predictions solve approximations of the governing laws of fluid motions on a grid. Ultimately, uncertainties in climate predictions originate from the poor or missing representation of processes, such as turbulence and clouds, that are not resolved on the grid of global climate models. The representation of these unresolved processes has been a bottleneck in improving climate predictions.

The explosion of climate data and the power of machine learning algorithms are suddenly offering new opportunities: can we deepen our understanding of these unresolved processes and simultaneously improve their representation in climate models to reduce the uncertainty of climate projections?

In this talk, I will discuss the current state of climate modeling and projections and its future, focusing on the advantages and challenges of using machine learning for climate modeling. I will present some of our recent work in which we leverage tools from machine learning and deep learning to learn representations of unresolved processes and improve climate simulations. Our work suggests that machine learning could unlock the door to discovering new physics from data and enhance climate predictions.  
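
The flavor of learning an unresolved-process representation can be shown on a toy 1D field: define the subgrid forcing as the difference between the coarse-grained quadratic term and the quadratic of the coarse field, then regress it on resolved quantities (everything below is synthetic):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def coarsen(u, f=8):                       # block-average onto the coarse grid
    return u.reshape(-1, f).mean(axis=1)

X, y = [], []
for _ in range(200):
    u = np.cumsum(rng.normal(size=256))    # synthetic fine-grid field
    u -= u.mean()
    sg = coarsen(u**2) - coarsen(u)**2     # subgrid part of the quadratic term
    ub = coarsen(u)
    X.append(np.column_stack([ub, np.gradient(ub)]))   # resolved predictors
    y.append(sg)

X, y = np.vstack(X), np.concatenate(y)
closure = RandomForestRegressor(n_estimators=50).fit(X, y)
print(closure.score(X, y))                 # in-sample fit of the learned closure
```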

March 26, 2021

Quantum Materials Design Using Artificial Intelligence

Speaker: Trevor David Rhone, Rensselaer Polytechnic Institute

When the dimensionality of an electron system is reduced from three dimensions to two, new behavior emerges. This has been demonstrated in two-dimensional (2D) materials such as graphene, a single atomic layer of graphite, which was discovered in 2004. Many years later, in 2017, 2D materials with intrinsic magnetic order were discovered, giving rise to a new frontier in science exploration and industrial innovation. However, many challenges remain in the search for new 2D magnetic materials. Some estimates place the number of materials that exist in nature as large as 10^100. Is it possible to efficiently explore this vast chemical space in order to accelerate the discovery of 2D magnetic materials? Can we predict their properties?

March 5, 2021

Data science for social equality

Speaker: Emma Pierson, Microsoft Research

Our society remains profoundly unequal. This talk discusses how data science and machine learning can be used to combat inequality in healthcare and public health by presenting several vignettes about pain, COVID, and women's health.

February 26, 2021

Deep Learning and Computations of high-dimensional PDEs

Speaker: Siddhartha Mishra, ETH Zurich

Partial differential equations (PDEs) with very high-dimensional state and/or parameter spaces arise in a wide variety of contexts, ranging from computational chemistry and finance to many-query problems in various areas of science and engineering. In this talk, we will survey recent results on the use of deep neural networks in computing these high-dimensional PDEs. We will focus on two aspects: the use of supervised deep learning, in the form of both standard deep neural networks and the recently proposed DeepONets, for the efficient approximation of many-query PDEs, and the use of physics-informed neural networks (PINNs) for the computation of forward and inverse problems for PDEs with very high-dimensional state spaces.
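
A minimal PINN sketch for a 1D Poisson problem (a toy example of ours, not code from the talk); here the boundary conditions are enforced weakly through a penalty term rather than built into the trial solution:

```python
import torch

# Solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
xb = torch.tensor([[0.0], [1.0]])             # boundary points

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)
    loss = residual.pow(2).mean() + 100.0 * net(xb).pow(2).mean()  # PDE + BC penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[0.5]])).item())      # exact solution: sin(pi/2) = 1
```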

February 21, 2021

NotCo: Creating an AI that fixes the broken food industry

Speaker: Karim Pichara, NotCo

NotCo is a Chilean startup that has been disrupting the food industry for the last four years. It was founded by Matías Muchnick, Karim Pichara, and Pablo Zamora. Their mission is to take the animal out of the food equation, proposing delicious products traditionally made with animal ingredients, using only plants. NotCo created Giuseppe, an AI that solves a problem humans would probably never address alone: finding ways to imitate animal-based products using only plant-based ingredients. Giuseppe today integrates dozens of different food and plant databases. He can produce plant-based recipes with mind-blowing ingredient combinations and continuously learns from chefs' and food scientists' feedback. NotCo's AI is designed by an interdisciplinary team that combines machine learning and software engineers, food scientists, and chefs. NotCo has invested in scientific machinery that screens plants and food ingredients to provide Giuseppe with a novel perspective from which to analyze food. Today, Giuseppe can assist food scientists with several tasks related to product creation and scaling. NotCo aims to cover most of the critical aspects of the food industry with AI soon. The company closed a US$85 million Series C round. Their current products are NotMilk, NotBurger, NotIceCream, and NotMayo. NotCo operates in several Latin American countries and entered the US market in November 2020 with NotMilk in Whole Foods stores.

February 19, 2021

Autoencoders and Causality in the Light of Drug Repurposing for COVID-19

Speaker: Caroline Uhler, MIT

Massive data collection holds the promise of a better understanding of complex phenomena and ultimately, of better decisions. An exciting opportunity in this regard stems from the growing availability of perturbation / intervention data (genomics, advertisement, education, etc.). In order to obtain mechanistic insights from such data, a major challenge is the integration of different data modalities (video, audio, interventional, observational, etc.). Using genomics as an example, I will first discuss our recent work on coupling autoencoders to integrate and translate between data of very different modalities such as sequencing and imaging. I will then present a framework for integrating observational and interventional data for causal structure discovery and characterize the causal relationships that are identifiable from such data. We then provide a theoretical analysis of autoencoders linking overparameterization to memorization. In particular, I will characterize the implicit bias of overparameterized autoencoders and show that such networks trained using standard optimization methods implement associative memory. We end by demonstrating how these ideas can be applied for drug repurposing in the current COVID-19 crisis.

October 30, 2020

Start With Why: It's Not Just Good Leadership Advice, It's Good Data Practice!

Speaker: Jessica Stauth, Fidelity Labs

In 2009, author and motivational speaker Simon Sinek delivered the now-classic TED talk “Start with Why”. Viewed by over 28 million people, “Start with Why” is the third most popular TED video of all time, and it teaches us that great leaders and companies inspire us to take action by focusing on the WHY over the “what” or the “how”. In this talk we’ll ask how applied data and computational scientists can use the power of WHY to frame problems, inspire others, and give them answers to business questions they might never think of asking.

October 23, 2020

A Function Approximation Perspective on Sensory Representations

Speaker: Cengiz Pehlevan, Harvard SEAS

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In his talk, Professor Pehlevan will describe his group’s efforts in building a theory of representations as feature maps leading to sample efficient function approximation. Kernel methods are at the heart of these developments. He will present applications of his group's theories to deep learning and neuronal data.
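
The kernel viewpoint can be made concrete in a few lines: a feature map defines a kernel, and ridge regression in the corresponding function space is the sample-efficient approximator. An RBF kernel and toy target are assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Z, ell=0.3):                    # kernel induced by a feature map
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

X = rng.uniform(-1, 1, (30, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=30)

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)   # kernel ridge regression

X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
print(rbf(X_test, X) @ alpha)              # predictions
print(np.sin(3 * X_test[:, 0]))            # ground truth
```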

October 16, 2020

Can Computers Create Art?

Speaker: Aaron Hertzmann, Adobe Inc.

In this talk, Dr. Hertzmann will discuss whether computers, using Artificial Intelligence (AI), can create art. His talk will cover the history of automation in art, examining the hype and reality of AI tools for art together with predictions about how they will be used. Dr. Hertzmann will also discuss different scenarios for how an algorithm could be considered the author of a piece of artwork, which, he argues, comes down to questions of why we create and appreciate artwork.

October 2, 2020

Physics-Informed Machine Learning in Astronomy

Speaker: Josh Bloom, UC Berkeley

While “off-the-shelf” ML has become pervasively used throughout astronomy inference workflows, there is an exciting new space emerging where novel learning algorithms and computational approaches are demanded and developed to address specific domain questions. After describing such efforts, including the search for Planet 9 and for new classes of variable sources, Dr. Bloom will turn his attention to new practical implementations and uses for generative models in astronomy. One application arises in the need to optimize telescope observing cadences, which requires the generation of physically plausible astronomical time series. Bloom will present his team's approach to this using semi-supervised variational autoencoders in which physical inputs are mapped to the (generative) latent space. He will also present a new architecture that exploits known symmetries in periodic variable star observations and yields state-of-the-art classification results. Last, he will highlight work on a successful framework for fast discovery and inpainting of imaging artifacts (cosmic rays).

September 18, 2020

Understanding Deep Learning: Theoretical Building Blocks from The Study of Wide Networks

Speaker: Yasaman Bahri, Google Brain

Deep neural networks are a rich class of models now used across many domains, but our theoretical understanding of their learning and generalization is relatively less developed. A fruitful angle for investigation has been to study deep neural networks that are also wide (having many hidden units per layer), which has given rise to foundational connections between deep networks, kernel methods, and Gaussian processes. Dr. Bahri will briefly survey her past work in this area and then focus on recent work that sheds light on regimes not captured by existing theory. She will discuss how the choice of learning rate in gradient descent separates the dynamics of deep neural networks into two classes that are separated by a sharp phase transition as networks become wider. These two phases have distinct signatures that are predicted by a class of solvable models. Altogether these findings serve as building blocks for constructing a more complete, predictive theory of deep learning.
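
One of those foundational connections can be written down directly: an infinitely wide one-hidden-layer ReLU network corresponds to a Gaussian process whose covariance is the degree-1 arc-cosine kernel (Cho and Saul), so inference in that regime reduces to kernel regression. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def arccos_kernel(X, Z):
    # NNGP kernel of a wide one-hidden-layer ReLU network (up to scaling):
    # K(x, z) = |x||z| (sin t + (pi - t) cos t) / pi, t = angle between x and z.
    nx = np.linalg.norm(X, axis=1)[:, None]
    nz = np.linalg.norm(Z, axis=1)[None, :]
    t = np.arccos(np.clip(X @ Z.T / (nx * nz), -1.0, 1.0))
    return nx * nz * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi

X = rng.normal(size=(50, 3))
y = np.tanh(X[:, 0] - X[:, 1])
alpha = np.linalg.solve(arccos_kernel(X, X) + 1e-6 * np.eye(len(X)), y)

X_new = rng.normal(size=(5, 3))
print(arccos_kernel(X_new, X) @ alpha)     # GP posterior mean predictions
print(np.tanh(X_new[:, 0] - X_new[:, 1]))  # targets
```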

September 11, 2020

What Are Useful Uncertainties in Deep Learning and How Do We Get Them?

Speaker: Weiwei Pan, Harvard University

While deep learning has demonstrable success on many tasks, the point estimates provided by standard deep models can lead to overfitting and provide no uncertainty quantification on predictions. However, when models are applied to critical domains such as autonomous driving, precision health care, or criminal justice, reliable measurements of a model's predictive uncertainty may be as crucial as the correctness of its predictions. At the same time, increasing attention in the recent literature is being paid to separating sources of predictive uncertainty, with the goal of distinguishing types of uncertainty reducible through additional data collection from those that represent stochasticity inherent in the data-generation process. In this talk, Dr. Pan will examine a number of deep (Bayesian) models that promise to capture complex forms of predictive uncertainty. She will also examine metrics commonly used to evaluate such uncertainties. Her aim is to highlight strengths and limitations of the models as well as the metrics; she will discuss potential ways to improve both in meaningful ways for downstream tasks.

April 15, 2022

Peril, Prudence and Planning as Risk, Avoidance and Worry

Speaker: Peter Dayan, Max Planck Institute for Biological Cybernetics, Tübingen

Risk occupies a central role in both the theory and practice of decision-making. Although it is deeply implicated in many conditions involving dysfunctional behavior and thought, modern theoretical approaches from economics and computer science to understanding and mitigating risk, in either one-shot or sequential settings, have yet to permeate fully the fields of neural reinforcement learning and computational psychiatry. Here we use one prominent approach, called conditional value-at-risk (CVaR), to examine two forms of time-consistent optimal risk-sensitive choice and optimal, risk-sensitive offline planning. We relate the former to both a justified form of the gambler's fallacy and extremely risk-avoidant behavior resembling that observed in anxiety disorders. We relate the latter to worry and rumination.
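
For a concrete handle on the central quantity: at level alpha, VaR is the alpha-quantile of the loss distribution and CVaR is the expected loss beyond it. A quick numerical estimate on synthetic losses:

```python
import numpy as np

rng = np.random.default_rng(0)

losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # synthetic losses
alpha = 0.95
var = np.quantile(losses, alpha)            # value-at-risk
cvar = losses[losses >= var].mean()         # mean loss in the worst 5% tail
print(f"VaR_{alpha} = {var:.2f}, CVaR_{alpha} = {cvar:.2f}")
```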

April 8, 2022

Scientific software ecosystems and communities: Why we need them and how each of us can help them thrive

Speaker: Lois Curfman McInnes, Argonne National Laboratory

Software is a cornerstone of long-term collaboration and scientific progress, but software complexity is increasing due to disruptive changes in computer architectures and the challenges of next-generation science. Thus, the high-performance computing (HPC) community has a unique opportunity to fundamentally change how scientific software is designed, developed, and sustained: embracing community collaboration toward scientific software ecosystems, while fostering a diverse HPC workforce embodying a broad range of skills and perspectives. This presentation will introduce work in the U.S. Exascale Computing Project, where a varied suite of scientific applications builds on programming models and runtimes, math libraries, data and visualization packages, and development tools that comprise the Extreme-scale Scientific Software Stack (E4S). The presentation will introduce crosscutting strategies that are increasing developer productivity and software sustainability, thereby mitigating technical risks by building a firmer foundation for reproducible, sustainable science. The presentation will also mention complementary community efforts and opportunities for involvement.

March 25, 2022

Angels and insects: On artificial and natural optimization

Speaker: Petros Koumoutsakos, Harvard University

What are the working methods of nature, and how do they differ from those of engineers? Technical solutions reminiscent of nature can be found in airplane wings, in velcro bindings, and in microbots such as artificial bacterial flagella. However, nature's designs are soft, fuzzy, and ephemeral, whereas engineers often opt for structures that outlast them. Koumoutsakos will argue for a path that bridges nature's ways with those of engineers. This path calls for adapting natural algorithms of evolution and learning into design principles for engineering constructs. Koumoutsakos will show how advances in computing over the last decade have provided a boost in these efforts, and will outline learning and optimization algorithms that aim to harness these capabilities for scientific applications.

March 4, 2022

The Quarks of Attention

Speaker: Pierre Baldi, University of California, Irvine

Attention plays a fundamental role in both natural and artificial intelligence systems. In deep learning, several attention-based neural network architectures have been proposed to tackle problems in natural language processing (NLP) and beyond, including transformer architectures which currently achieve state-of-the-art performance in NLP tasks. In this presentation we will: 1) identify and classify the most fundamental building blocks (quarks) of attention, both within and beyond the standard model of deep learning; 2) identify how these building blocks are used in all current attention-based architectures, including transformers; 3) demonstrate how transformers can effectively be applied to new problems in physics, from particle physics to astronomy; and 4) present a mathematical theory of attention capacity where, paradoxically, one of the main tools in the proofs is itself an attention mechanism.
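
The most familiar of these building blocks is scaled dot-product attention, softmax(QK^T / sqrt(d)) V; a self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                        # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                   # row-wise softmax
    return w @ V                                         # weighted value mixture

n, d = 5, 16
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
print(attention(X @ Wq, X @ Wk, X @ Wv).shape)           # (5, 16)
```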

February 25, 2022

Data-sparse Linear Algebra Algorithms for Large-scale Applications on Emerging Architectures

Speaker: David Keyes, King Abdullah University of Science and Technology (KAUST)

A traditional goal of algorithmic optimality, squeezing out flops, has been superseded by evolution in architecture. Flops no longer serve as a reasonable proxy for all aspects of complexity. Instead, algorithms must now squeeze memory, data transfers, and synchronizations, while extra flops on locally cached data represent only small costs in time and energy. Hierarchically low-rank matrices realize a rarely achieved combination of optimal storage complexity and high computational intensity for a wide class of formally dense linear operators that arise in applications for which exascale computers are being constructed. They may be regarded as algebraic generalizations of the fast multipole method. Methods based on these hierarchical data structures and their simpler cousins, tile low-rank matrices, are well proportioned for early exascale computer architectures, which are provisioned for high processing power relative to memory capacity and memory bandwidth. They are ushering in a renaissance of computational linear algebra. A challenge is that emerging hardware architectures possess hierarchies of their own that do not generally align with those of the algorithm. We describe modules of a software toolkit, Hierarchical Computations on Manycore Architectures, that illustrate these features and are intended as building blocks of applications, such as matrix-free higher-order methods in optimization and large-scale spatial statistics. Some modules of this open-source project have been adopted in the software libraries of major vendors.
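
The core compression step can be illustrated directly: a well-separated off-diagonal tile of a matrix generated by a smooth kernel has rapidly decaying singular values, so a truncated SVD stores it in r(m + n) numbers instead of mn (synthetic kernel below):

```python
import numpy as np

rng = np.random.default_rng(0)

pts = np.sort(rng.uniform(0, 1, 512))
K = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))  # smooth, formally dense

tile = K[:128, 384:]                        # well-separated off-diagonal tile
U, s, Vt = np.linalg.svd(tile)
r = int(np.searchsorted(-s, -1e-8 * s[0]))  # numerical rank at 1e-8 tolerance
approx = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r representation of the tile
print(r, np.linalg.norm(tile - approx) / np.linalg.norm(tile))
```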

February 18, 2022

Learning physics-based models from data: Perspectives from model reduction

Speaker: Karen Willcox, University of Texas at Austin, Oden Institute

Operator Inference is a method for learning predictive reduced-order models from data. The method targets the derivation of a reduced-order model of an expensive high-fidelity simulator that solves known governing equations. Rather than learn a generic approximation with weak enforcement of the physics, we learn low-dimensional operators whose structure is defined by the physical problem being modeled. These reduced operators are determined by solving a linear least squares problem, making Operator Inference scalable to high-dimensional problems. The method is entirely non-intrusive, meaning that it requires simulation snapshot data but does not require access to or modification of the high-fidelity source code. For problems where the complexity of the physics does not admit a global low-rank structure, we construct a nonlinear approximation space. This is achieved via clustering to obtain localized Operator Inference models, or by approximation in a quadratic manifold. The methodology is demonstrated on challenging large-scale problems arising in rocket combustion and materials phase-field applications.
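
For the linear case, the method fits in a few lines: collect snapshots, build a POD basis, estimate time derivatives, and solve a least-squares problem for the reduced operator (quadratic physics would enter as additional least-squares columns). A synthetic sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshots of a stable high-dimensional linear system x' = A x.
n, T, dt, r = 200, 400, 1e-3, 5
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
X = np.empty((n, T))
X[:, 0] = rng.normal(size=n)
for k in range(T - 1):
    X[:, k + 1] = X[:, k] + dt * A @ X[:, k]

V = np.linalg.svd(X, full_matrices=False)[0][:, :r]    # POD basis
Xh = V.T @ X                                           # reduced snapshots
Xdot = np.gradient(Xh, dt, axis=1)                     # derivative estimates
Ahat = np.linalg.lstsq(Xh.T, Xdot.T, rcond=None)[0].T  # inferred reduced operator

print(np.linalg.norm(Ahat - V.T @ A @ V))   # close to the intrusive projection
```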

December 10, 2021

Enabling Zero-Shot Generalization in AI4Science

Speaker: Anima Anandkumar, California Institute of Technology

Many scientific applications rely heavily on brute-force numerical methods performed on high-performance computing (HPC) infrastructure. Can artificial intelligence (AI) methods augment or even entirely replace these brute-force calculations to obtain significant speed-ups? Can we make groundbreaking new discoveries because of such speed-ups? Such AI4Science applications, however, often require zero-shot generalization to entirely new scenarios not seen during training. I will present exciting recent advances that build new foundations in AI applicable to a wide range of problems, such as fluid dynamics and quantum chemistry. On the other side of the coin, the use of simulations to train AI models can be very effective in applications such as robotics and autonomous driving. Thus, we will see a convergence of AI, simulations, and HPC in the coming years.
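
As a hedged illustration of the kind of architecture involved (in the spirit of Fourier neural operators, not any specific released code), a layer can learn its action on the lowest Fourier modes of its input, making it largely resolution-independent:

```python
import torch

class SpectralLayer(torch.nn.Module):
    """Learned action on the lowest Fourier modes plus a pointwise skip path."""
    def __init__(self, modes=16, width=32):
        super().__init__()
        self.modes = modes
        self.w = torch.nn.Parameter(
            0.02 * torch.randn(width, width, modes, dtype=torch.cfloat))
        self.skip = torch.nn.Conv1d(width, width, 1)

    def forward(self, x):                       # x: (batch, width, n_grid)
        xh = torch.fft.rfft(x)
        out = torch.zeros_like(xh)
        out[..., :self.modes] = torch.einsum(
            "bim,iom->bom", xh[..., :self.modes], self.w)
        return torch.relu(torch.fft.irfft(out, n=x.size(-1)) + self.skip(x))

x = torch.randn(8, 32, 128)                     # batch of functions on a grid
print(SpectralLayer()(x).shape)                 # torch.Size([8, 32, 128])
```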

November 19, 2021

A large-scale analysis of police stops across the United States

Speaker: Ravi Shroff, New York University

To assess racial disparities in police interactions with the public, we compiled and analyzed a dataset detailing nearly 100 million municipal and state patrol traffic stops conducted in dozens of jurisdictions across the country, the largest such effort to date. We analyzed these records in three steps. First, we measured potential bias in stop decisions by examining whether Black drivers are less likely to be stopped after sunset, when a "veil of darkness" masks one's race. Second, we investigated potential bias in decisions to search stopped drivers. Finally, we examined the effects of legalizing recreational marijuana on policing in Colorado and Washington state. We find evidence of bias against minority drivers in both stop and search decisions, and also that the bar for searching minority drivers remains lower than for white drivers after marijuana legalization.
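
The logic of the first step can be sketched on synthetic data: within the same clock-time window, darkness varies with the calendar, so regressing driver race on a darkness indicator (controlling for clock time) tests whether visibility changes who gets stopped. All numbers below are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n = 20_000
minute = rng.integers(17 * 60, 20 * 60, n)        # stops between 5pm and 8pm
dark = rng.random(n) < (minute - 17 * 60) / 180   # later clock time, likelier dark
p_black = 0.30 - 0.05 * dark                      # assumed daylight bias
df = pd.DataFrame({
    "black": (rng.random(n) < p_black).astype(int),
    "dark": dark.astype(int),
    "minute": minute,
})

fit = smf.logit("black ~ dark + minute", data=df).fit(disp=0)
# Negative coefficient: a smaller share of Black drivers among stops made after
# dark, consistent with race influencing daylight stop decisions.
print(fit.params["dark"])
```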

November 12, 2021

We used RL; but did it work?

Speaker: Susan Murphy, Harvard University

Reinforcement learning provides an attractive suite of online learning methods for personalizing interventions in digital health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? We discuss some first approaches to addressing these questions.

October 22, 2021

Generative Flow Networks

Speaker: Yoshua Bengio, Université de Montréal

Generative Flow Networks (or GFlowNets) have been introduced as a method to sample a diverse set of candidates in an active-learning context, with a training objective that makes them approximately sample in proportion to a given reward function. We show a number of additional theoretical properties of GFlowNets. They can be used to estimate joint probability distributions and corresponding marginal distributions (when some variables are unspecified) and are particularly interesting for representing distributions over composite objects like sets and graphs. They amortize, in a single trained generative pass, the work typically done by computationally expensive MCMC methods. They can be used to estimate partition functions and free energies; conditional probabilities of supersets (or supergraphs) given a subset (or an included subgraph); and marginal distributions over all supersets of a set or supergraphs of a graph. The talk will highlight the relations and differences to standard approaches in generative modeling and reinforcement learning and summarize early experimental results obtained in the context of exploring the space of molecules to discover ones with properties of interest.
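
In rough form (our summary, hedged), a GFlowNet learns a nonnegative flow F over a directed acyclic graph of construction states so that flow is conserved at internal states and matches the reward at terminal states; sampling forward in proportion to outgoing flows then draws terminal objects x with probability proportional to R(x):

```latex
% Flow-matching conditions (s internal, x terminal); the forward policy is
% P(s'' | s) proportional to F(s -> s'').
\sum_{s':\, s' \to s} F(s' \to s) \;=\; \sum_{s'':\, s \to s''} F(s \to s''),
\qquad
\sum_{s':\, s' \to x} F(s' \to x) \;=\; R(x)
```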

October 6, 2021

Reliable Predictions? Counterfactual Predictions? Equitable Treatment? Some Recent Progress in Predictive Inference

Speaker: Emmanuel Candès, Stanford University

Recent progress in machine learning provides us with many potentially effective tools to learn from datasets of ever increasing sizes and make useful predictions. How do we know that these tools can be trusted in critical and high-sensitivity systems? If a learning algorithm predicts the GPA of a prospective college applicant, what guarantees do I have concerning the accuracy of this prediction? How do we know that it is not biased against certain groups of applicants? This talk introduces statistical ideas to ensure that the learned models satisfy some crucial properties, especially reliability and fairness (in the sense that the models need to apply to individuals in an equitable manner). To achieve these important objectives, we shall not “open up the black box” and try to understand its underpinnings. Rather, we discuss broad methodologies that can be wrapped around any black box to produce results that can be trusted and are equitable. We also show how our ideas can inform causal inference; for instance, we will answer counterfactual predictive problems, i.e., predict what the outcome of a treatment would have been given that the patient was actually not treated.
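
One of the broad wrap-around methodologies alluded to here is split conformal prediction, which turns any black-box regressor into calibrated prediction intervals. A compact sketch:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

X = rng.uniform(-2, 2, (2000, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=2000)
X_tr, y_tr, X_cal, y_cal = X[:1000], y[:1000], X[1000:], y[1000:]

model = GradientBoostingRegressor().fit(X_tr, y_tr)     # any black box works
scores = np.abs(y_cal - model.predict(X_cal))           # calibration residuals
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(pred - q, pred + q)     # interval with ~90% marginal coverage guarantee
```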

September 10, 2021

Machine Learning in Science: Applications, Algorithms and Architectures

Speaker: Kathy Yelick, UC Berkeley

Machine learning is being used in nearly every discipline in science, from biology and environmental science to chemistry, cosmology and particle physics. Scientific data sets continue to grow exponentially due to improvements in detectors, accelerators, imaging, and sequencing as well as networks of embedded sensors and personal devices. In some domains, large data sets are being constructed, curated, and shared with the scientific community and data may be reused for multiple problems using emerging algorithms and tools for new insights. Machine learning adds a powerful set of techniques to the scientific toolbox, used to analyze complex, high-dimensional data, automate and control experiments, approximate expensive experiments, and augment physical models with models learned from data.

On the systems side, scientists have always demanded some of the fastest computers for large and complex simulations and more recently for high throughput simulations that produce databases of annotated materials and more. Now the desire to train machine learning models on scientific data sets and for robotics, speech and vision, has created a new set of users and demands for high end computing. The changing architectural landscape has increased node level parallelism, added new forms of hardware specialization, and continued the ever-growing gap between the cost of computation and data movement at all levels. These changes are being reflected in both commercial clouds and HPC facilities—including upcoming exascale facilities—and also placing new requirements on scientific applications, whether they are performing physics-based simulations, traditional data analytics, or machine learning. Using examples from my own research in bioinformatics for the microbiome, I will describe some of the algorithmic challenges and how these machines are being used to deliver new scientific capabilities.