IACS seminars are generally held every other Friday at lunchtime during the academic year. Students, faculty, and others interested in computational science and applied computation are welcome to attend. See the calendar page to learn about upcoming seminars.
Fall 2013 IACS Seminar Series
September 13, 2013
John Hasbrouck Van Vleck Professor of Pure and Applied Physics
Using Computation to Diagnose and Predict Heart Disease
The patterns of blood flow in arteries are crucial in determining the onset and progression of heart disease. These patterns can only be captured by simulations, assuming that the important details at different scales are properly described. This presentation will give an overview of our efforts to construct multiscale models of arterial blood flow based on the lattice Boltzmann equation.
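To make the abstract's reference concrete, here is a minimal sketch of a single-relaxation-time (BGK) lattice Boltzmann update on the standard D2Q9 lattice. It is illustrative only: the lattice, relaxation time, and grid size are generic textbook choices, not the speaker's multiscale arterial model.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities with their standard weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order in velocity."""
    cu = np.einsum('id,xyd->xyi', c, u)                   # c_i . u at each node
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]       # |u|^2
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    """One collide-and-stream update of the distributions f[x, y, i]."""
    rho = f.sum(axis=-1)                                  # macroscopic density
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]   # macroscopic velocity
    f = f + (equilibrium(rho, u) - f) / tau               # BGK collision
    for i, ci in enumerate(c):                            # streaming (periodic)
        f[..., i] = np.roll(f[..., i], shift=tuple(ci), axis=(0, 1))
    return f

# Start from a uniform fluid at rest with a small density perturbation
rho0 = np.ones((32, 32)) + 0.01 * np.random.rand(32, 32)
f = equilibrium(rho0, np.zeros((32, 32, 2)))
mass0 = f.sum()
for _ in range(10):
    f = lbm_step(f)
```

Both the collision and the streaming step conserve total mass exactly, which is a useful sanity check when extending a sketch like this toward real flow geometries.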
September 27, 2013
Senior Principal Engineer and Program Leader for Materials Design, Design and Technology Group, Intel Corp.; IACS Distinguished Scientist in Residence
Prediction, Renaissance, and Cognition - 3 Questions for Computing
With the increasing power of computing, humans appear to be on the verge of a golden era in the use of computing to address problems in all areas, including energy, health, and information. Extrapolating the ever-increasing efficacy of hardware and software, it appears that we are moving toward being totally predictive and even exceeding the computing power of the brain. Drawing on our work on several aspects of modeling in chemistry and materials science, we will address the feasibility of such a vision and look back to history and the Renaissance to distill lessons for the future of computing. In this journey, we hope to take you back to the future, in which prediction has been one of the most sought-after goals for humans.
October 11, 2013
Dmitri "Mitya" Chklovskii
Janelia Lab Head, Howard Hughes Medical Institute
How the Brain Handles Big Data: Online Algorithms in Neurons
Our brains constantly handle big data streamed by our sensory organs. Yet how this is done in neurons, the elementary building blocks of the brain, is not understood. We propose to view a neuron as a signal processing device representing its high-dimensional input by a synaptic weight vector scaled by its output. A neuron accomplishes this task by running two online algorithms: a slow algorithm that adjusts synaptic weights to extract the most non-Gaussian projection of the high-dimensional input, and a fast algorithm that estimates the projection amplitude. Both online algorithms rely on sparsity-inducing regularizers and have provable regret bounds. The steps of these algorithms account for the salient physiological features of neurons, such as leaky integration, a nonlinear output function, Hebbian synaptic plasticity rules, and sparse connectivity and activity. Thus, our work should help model biological neural circuits and develop biologically inspired computing.
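The fast/slow structure described in the abstract can be sketched in a few lines: a fast loop that estimates the output amplitude by leaky integration with a sparsity-inducing threshold, and a slow Hebbian loop that adjusts the weight vector. This is a generic sparse-coding toy, not the speaker's exact algorithms or their regret-bounded variants; all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(z, lam):
    """Sparsity-inducing shrinkage (the prox operator of an L1 penalty)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

class Neuron:
    """Toy neuron: fast amplitude estimation plus slow Hebbian learning."""

    def __init__(self, dim, lam=0.1, eta=0.05, leak=0.5, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.standard_normal(dim)          # synaptic weight vector
        self.w /= np.linalg.norm(self.w)
        self.lam, self.eta, self.leak = lam, eta, leak

    def respond(self, x, n_steps=20):
        """Fast loop: leaky integration of the input drive, thresholded output."""
        y = 0.0
        for _ in range(n_steps):
            y += self.leak * (self.w @ x - y)      # leaky integration
            y = soft_threshold(y, self.lam)        # sparse, nonlinear output
        return y

    def learn(self, x):
        """Slow loop: Hebbian update (pre times post), then renormalize."""
        y = self.respond(x)
        self.w += self.eta * y * (x - y * self.w)  # Oja-style Hebbian rule
        self.w /= max(np.linalg.norm(self.w), 1e-12)
        return y

neuron = Neuron(dim=10)
strong = 2.0 * neuron.w          # input aligned with the weight vector
weak = np.zeros(10)              # no input drive
# strong drive yields a nonzero response; weak drive is gated to exactly zero
```

The soft threshold makes activity sparse (small drives produce exactly zero output), while the Hebbian step slowly rotates the weight vector toward the directions that drive the neuron, mirroring the leaky integration, nonlinearity, and plasticity features listed above.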
October 25, 2013
Director of Data Science, Institute for Quantitative Social Science, Harvard
10 Simple Rules for the Care and Feeding of Scientific Data
Increasingly, scientific publications and claims are based on ever-growing volumes of data. Once a publication is complete, it is often difficult for others to locate the data and accompanying analyses and, once they are located, often challenging to make sense of them. For scientific results to remain subject to verification and extension, we in the scientific community must ensure that good data management, with sufficient transparency and accessibility of data and analyses, becomes an essential and ordinary element of the research cycle. In this paper, we present 10 simple rules to help scientists toward this goal.
November 8, 2013
Director, Analog Devices Lyric Labs
Probabilistic Programming and Probability Processing
We are developing a computing stack for Bayesian inference and machine learning, including integrated circuits, probabilistic programming languages, compilers, and applications. Our first probability processor hardware demonstrates orders-of-magnitude improvements on machine learning and statistical inference benchmarks. We are developing open-source probabilistic programming languages that enable rapid prototyping and development of statistical machine learning applications, and we will demonstrate some applications that we are building on top of the probability processing stack.
November 22, 2013
Assistant Professor of Computer Science, Université de Sherbrooke, Canada
Deep Learning for Distribution Estimation
Deep learning methods attempt to learn a deep and distributed representation of data directly from its low-level representation. The motivating argument is that high-dimensional data in AI-related domains (speech, computer vision, natural language) can be represented more meaningfully as several layers of abstraction that disentangle its different factors of variation. Deep learning methods thus try to discover and learn this representation directly from data.