Celebrating 100 Years of Markov Chains

Wednesday, Jan. 23
9:15 am-12:15 pm
Maxwell Dworkin G115, 33 Oxford Street (note room change)
Cambridge MA 02138

A. A. Markov, 1886

On Jan. 23, 1913, A. A. Markov delivered a lecture at the Imperial Academy of Sciences in St. Petersburg demonstrating the Markov chain, a technique now central to 21st-century computational science, through a remarkable analysis of the letters in Pushkin's Eugene Onegin.

We celebrate the 100th anniversary of this seminal research—which in turn was published in celebration of the bicentennial of the law of large numbers—with a trio of ComputeFest talks. This event is organized by Brian Hayes, Senior Writer at American Scientist magazine and an Associate of SEAS.

Open to the public; no registration required.


Brian Hayes
Senior Writer, American Scientist; Associate of SEAS

First Links in the Markov Chain: Poetry and Probability

On January 23, 1913, the Russian mathematician A. A. Markov presented a paper at the Imperial Academy of Sciences in St. Petersburg describing his careful enumerations of vowels and consonants in Alexander Pushkin's poem Eugene Onegin. A century later, the techniques that Markov discussed that day are in daily use everywhere in statistics and scientific computing. We call them Markov chains. On the occasion of this hundredth anniversary I want to consider what motivated Markov's work in this area, and how he came to illustrate his mathematical ideas with an analysis of poetic language. I will also discuss a few latter-day applications of Markov chains in linguistics, including the mass production of random prose meant to foil your spam filter.
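Markov's enumeration amounted to counting how often a vowel follows a vowel, a consonant follows a consonant, and so on. A minimal sketch of that counting, applied to an English sample rather than Markov's Russian text (the sample line and vowel set here are illustrative assumptions):

```python
# A sketch of Markov-style vowel/consonant transition counting.
# Markov worked with Russian letters; this toy version uses English vowels.
from collections import Counter

def transition_counts(text):
    """Count consecutive-letter transitions between vowel (V) and consonant (C) classes."""
    vowels = set("aeiou")
    letters = [c for c in text.lower() if c.isalpha()]
    classes = ["V" if c in vowels else "C" for c in letters]
    return Counter(zip(classes, classes[1:]))

# Sample text (any passage would do):
counts = transition_counts("My uncle, man of firm convictions")
print(counts)
```

From such counts Markov could estimate, for example, the probability that a vowel follows a consonant, and show that these dependent trials still obey a law of large numbers.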

Ryan Prescott Adams
Assistant Professor of Computer Science, SEAS

From Markov to Pearl: Conditional Independence as a Driving Principle for Probabilistic Modeling

The Markov chain is one of the fundamental abstractions for reasoning about stochastic systems. Markov's remarkable insight was that complex phenomena can be described by the evolution of a "memoryless" system. Markov chain theory has had an enormous impact on probabilistic computation, natural language processing, and information theory, among many other fields. In recent decades, Judea Pearl and others recognized that this notion of "conditional independence" could be used more broadly to define rich classes of probability distributions for complex natural phenomena. I will give an overview of how these ideas connect strongly with graph theory, leading to the concept of a probabilistic graphical model, a centerpiece of modern machine learning and statistics.
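The "memoryless" property means the distribution of the next state depends only on the current state, never on the path taken to reach it. A minimal sketch with an invented two-state chain (states and probabilities are illustrative assumptions, not from the talk):

```python
# A sketch of the Markov (memoryless) property: each step samples the next
# state using only the current state. The two-state chain below is invented.
import random

P = {  # transition probabilities: P[state][next_state]
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state given only the current state."""
    r = rng.random()
    cum = 0.0
    for nxt, p in P[state].items():
        cum += p
        if r < cum:
            return nxt
    return nxt  # guard against floating-point rounding

rng = random.Random(0)
chain = ["sunny"]
for _ in range(10):
    chain.append(step(chain[-1], rng))
print(chain)
```

A graphical model generalizes this one-dimensional chain: conditional independence is encoded by the edges of a graph rather than by a single linear ordering of states.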

Pavlos Protopapas
Research Associate, Harvard-Smithsonian Center for Astrophysics; Lecturer in Computational Science, SEAS

Applications of Markov Chains in Science

The Markov chain and its extension, Markov Chain Monte Carlo, are among the most widely used algorithms in the sciences. These methods have transformed the way we do science over the last twenty years. In the first part of this talk I will introduce the basic ideas behind these methods and demonstrate them with simple examples: a drunk walking along a pavement, a mutating virus, card shuffling, and so on. I will then review applications of Markov chains in various sciences, describing the latest developments and looking toward how Markov's insight might continue to shape computational science in the future.
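The drunk-on-a-pavement example is a random walk, one of the simplest Markov chains. A minimal sketch (the pavement width, step rule, and reflecting edges here are illustrative assumptions):

```python
# A sketch of the "drunk walking along a pavement" example: a random walk
# on positions 0..width-1, stepping left or right, kept on the pavement.
import random

def drunk_walk(steps, width=5, start=2, seed=1):
    """Simulate a bounded random walk; the next position depends only on the current one."""
    rng = random.Random(seed)
    pos = start
    path = [pos]
    for _ in range(steps):
        pos += rng.choice([-1, 1])          # stagger left or right
        pos = max(0, min(width - 1, pos))   # stay on the pavement
        path.append(pos)
    return path

print(drunk_walk(10))
```

Markov Chain Monte Carlo builds on the same idea: a carefully designed walk whose long-run visit frequencies match a target probability distribution.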

Session Chair: Brian Hayes

Download event poster:

markov_poster.pdf (484.04 KB)