NIPS*2014 Workshop

Revision as of 23:00, 9 November 2014 by Daniel Roy

Saturday, December 13, 2014
Montreal, Quebec, Canada
NIPS*2014 Conference


Organizers

  Vikash Mansinghka (MIT)
  Daniel Roy (Toronto)
  Thomas Dietterich (Oregon State)
  Stuart Russell (Berkeley)
  Joshua Tenenbaum (MIT)

Please pre-register for the workshop.


Probabilistic models and approximate inference algorithms have become widely-used tools, central to fields ranging from cosmology to robotics to genetics. However, even simple variations on models and algorithms from the standard machine learning and statistics toolkits can be difficult and time-consuming to design, specify, analyze, implement, optimize and debug. Due to these challenges, integrated, fully probabilistic approaches to fundamental AI problems can be impractical. Probabilistic programming aims to address these challenges by developing formal languages and software systems that integrate key ideas from probabilistic modeling and inference with programming languages and Turing-universal computation.
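To make the idea concrete, here is a minimal sketch (our own toy example, not drawn from any of the systems discussed at the workshop) of what a probabilistic program looks like in Python: the model is ordinary code containing random choices, and inference, here simple likelihood weighting, conditions that code on observed data. The names `coin_model` and `infer_bias` are hypothetical.

```python
import random

def flip(p):
    """Primitive random choice: True with probability p."""
    return random.random() < p

def coin_model():
    """A toy probabilistic program: draw an unknown coin bias from a
    uniform prior, then flip the coin ten times."""
    bias = random.random()
    flips = [flip(bias) for _ in range(10)]
    return bias, flips

def infer_bias(observed, num_samples=20000):
    """Condition the model on observed flips via likelihood weighting:
    weight each prior draw of the bias by the probability it assigns
    to the observations, and return the weighted (posterior) mean."""
    total_weight = 0.0
    weighted_sum = 0.0
    for _ in range(num_samples):
        bias = random.random()  # sample from the prior, as in coin_model
        weight = 1.0
        for obs in observed:
            weight *= bias if obs else (1.0 - bias)
        total_weight += weight
        weighted_sum += weight * bias
    return weighted_sum / total_weight

random.seed(0)
observed = [True] * 8 + [False] * 2  # 8 heads out of 10 flips
estimate = infer_bias(observed)
# Analytically the posterior is Beta(9, 3), with mean 0.75; the
# Monte Carlo estimate should land close to that value.
print(round(estimate, 2))
```

A probabilistic programming system automates this conditioning step for arbitrary generative code, so the modeler writes only the model and the query.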

The field of probabilistic programming has seen rapid growth and progress over the last two years. Several languages and open-source implementations are now mature enough to support real-world applications, especially in data analysis. Many new probabilistic programming languages have been developed; most of these are domain-specific, but some aim to be general-purpose. Formal connections to computable analysis, measure theory, and computational complexity are emerging, along with new AI architectures that make use of the representational flexibility that probabilistic programs and probabilistic programming systems provide. New problems have also emerged. There is a widespread need for software tools that implement mathematically rigorous approaches to profiling, testing, verifying and debugging probabilistic programs, and for high-quality libraries of models and inference techniques.

The 3rd NIPS Workshop on Probabilistic Programming will survey recent progress, including results from the ongoing DARPA PPAML program. A key theme will be articulating formal connections between probabilistic programming and other fields central to the NIPS community.



We have created a Google Groups mailing list called 3rd-nips-workshop-on-probabilistic-programming. To prevent spam, we will manually approve requests to join.


Important dates

  • OCT 24 - Accepting submissions (for early decisions, see the note below)
  • NOV  7 - Extended abstracts due (and NIPS early registration deadline!)
  • NOV 10 - Notification of acceptance sent
  • DEC 13 - Workshop

On request, decisions for submissions received between October 24 and November 3 will be made within 72 hours, to facilitate travel planning and early registration.


Session 1

  • 8:30 - 8:40 Opening remarks and workshop overview
  • 8:45 - 9:15 Keynote: Kathleen Fisher, "What everyone in ML ought to know about PL"
  • 9:15 - 9:35 Talk: Andy Gordon, "Tabular: Probabilistic Programs as Spreadsheet Queries"
  • 9:35 - 10:00 Talk: Vikash Mansinghka, "Venture: a general-purpose probabilistic programming platform"

10:00 - 10:30 Coffee break

Session 2

  • 10:30 - 11:15 Poster spotlights (in order of presentation)
    • Towards differentially private probabilistic programming
    • BFiT: From Possible-World Semantics to Random-Evaluation Semantics in Open Universe
    • Stan: A Platform for Bayesian Inference
    • Fully Automatic Variational Inference of Differentiable Probability Models
    • BUGS on the brain: Using probabilistic programming for modeling human perception
    • Probabilistic Constraint Programming
    • Probabilistic programming by abstraction refinement
    • Learning Probabilistic Programs
    • Probabilistic Programming for Malware Analysis
    • WOLFE: Strength Reduction and Approximate Programming for Probabilistic Programming
    • BQL and BayesDB: a probabilistic DSL and runtime system for data analysis and predictive analytics
    • bnpy: Reliable and scalable variational inference for Bayesian nonparametric models
    • CoreVenture: a high-level machine language for probabilistic programming
  • 11:15 - 12:00 Poster session #1
  • 12:00 - 12:45 Keynote: Christian Robert, "Approximate Bayesian Computation (ABC) for Model Choice: from Statistical Sufficiency to Machine Learning"

12:45 - 14:00 Group lunch sponsored by Microsoft Research

Session 3

  • 14:00 - 14:30 Talk: Avi Pfeffer, "Practical Probabilistic Programming with Figaro"
  • 14:35 - 14:55 Contributed talk: Dan Ritchie, "Quicksand: A Lightweight Implementation of Probabilistic Programming for Procedural Modeling and Design"
  • 15:00 - 16:30 Poster session #2

16:30 - 17:00 Coffee Break

Session 4

  • 17:00 - 17:20 Contributed talk: Jan Willem van de Meent, "Particle Gibbs with Ancestor Resampling for Probabilistic Programs"
  • 17:25 - 17:50 Talk: Daniel Roy, "Exchangeable Random Primitives, Conditional Independence, and Probably Approximately Correct Probabilistic Programs"
  • 17:50 - 18:00 Closing remarks
  • 18:00 - 18:30 Open discussion


The following abstracts are presented in alphabetical order:

  • A combinator library for MCMC sampling
    Praveen Narayanan, Chung-chieh Shan
  • Adaptive Scheduling in MCMC and Probabilistic Programming
    David Tolpin, Jan Willem van de Meent, Brooks Paige, Frank Wood
  • Algebraic Type Semantics for Hierarchical Model Architectures
    Eric Mjolsness
  • BFiT: From Possible-World Semantics to Random-Evaluation Semantics in Open Universe
    Yi Wu, Lei Li, Stuart Russell
  • bnpy: Reliable and scalable variational inference for Bayesian nonparametric models
    Michael C. Hughes, Erik B. Sudderth
  • BQL and BayesDB: a probabilistic DSL and runtime system for data analysis and predictive analytics
    Vikash K. Mansinghka, Patrick Shafto, Jay Baxter, Baxter S. Eaves Jr.
  • BUGS on the brain: Using probabilistic programming for modeling human perception
    Ulrik R. Beierholm
  • Composable gradient-based inference for higher-order probabilistic programming languages
    Vikash K. Mansinghka, Alexey Radul
  • CoreVenture: a high-level machine language for probabilistic programming
    Vikash K. Mansinghka, Alexey Radul
  • Designing an IDE for Probabilistic Programming: Challenges and a Prototype
    Sameer Singh, Sebastian Riedel, Luke Hewitt, Tim Rocktaeschel
  • Fully Automatic Variational Inference of Differentiable Probability Models
    Alp Kucukelbir, Rajesh Ranganath, Andrew Gelman, David Blei
  • GPGP3D: Generative Probabilistic Graphics Programming for 3D Shape Perception
    Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, Vikash K. Mansinghka
  • Learning Probabilistic Programs
    Yura N. Perov, Frank Wood
  • Particle Gibbs with Ancestor Resampling for Probabilistic Programs
    Jan Willem van de Meent, Hongseok Yang, Frank Wood
  • Practical Probabilistic Programming with Figaro
    Avi Pfeffer, Brian Ruttenberg, Michael Howard, Glenn Takata, Joe Gorman, Alison O’Connor
  • Probabilistic Constraint Programming
    Vijay Saraswat, Vineet Gupta, Radha Jagadeesan, Prakash Panangaden, Doina Precup, Francesca Rossi, Prithviraj Sen
  • Probabilistic Programming by Abstraction Refinement
    Zenna Tavares, Armando Solar-Lezama
  • Probabilistic Programming for Malware Analysis
    Brian Ruttenberg, Lee Kellogg, Avi Pfeffer
  • Quicksand: A Lightweight Implementation of Probabilistic Programming for Procedural Modeling and Design
    Daniel Ritchie
  • Stan: A Platform for Bayesian Inference
    Daniel Lee, Bob Carpenter, Peter Li, Michael Betancourt, Andrew Gelman
  • Tools for debugging and optimizing probabilistic programs
    Vlad Firoiu, Benjamin Zinberg, Alexey Radul, Vikash K. Mansinghka
  • Towards differentially private probabilistic programming
    Gilles Barthe, Gian Pietro Farina, Marco Gaboardi, Emilio Jesús Gallego Arias, Andrew D. Gordon, Justin Hsu, Aaron Roth, Pierre-Yves Strub
  • Unsupervised learning of probabilistic programs with latent predicate networks
    Eyal Dechter, Joshua Rule, Joshua B. Tenenbaum
  • VentureScript: a language for probabilistic modeling and interactive inference
    Vikash K. Mansinghka, Alexey Radul
  • WOLFE: Strength Reduction and Approximate Programming for Probabilistic Programming
    Sebastian Riedel, Sameer Singh, Vivek Srikumar, Tim Rocktaeschel, Larysa Visengeriyeva, Jan Noessner
  • Writing Customized Proposals for Probabilistic Programs as Probabilistic Programs
    Hongyi Zhang, Vikash Mansinghka


See the original NIPS*2014 Workshop/Call for Contributions here.

We are seeking three types of contributions:

  1. Research Abstracts --- Original research in probabilistic programming methodology and/or its applications. All aspects of probabilistic programming are appropriate, including theory, language design, inference, systems considerations, and applications.
  2. Language/System Descriptions --- Descriptions of languages and systems under active research and development. Abstracts should explain the intended coverage in terms of the models, datasets, queries, inference strategies and representative applications that are supported by the language. Distinctive features of the language design and system architecture are also of interest. All abstracts must include code and example outputs for at least two probabilistic programs.
  3. Challenge Problems --- Suggestions for challenge problems that the probabilistic programming community should consider. Application suggestions should introduce the problem, link to publicly available domain knowledge and/or data, suggest relevant modeling idioms and inference strategies, describe the current state-of-the-art, and characterize the potential impact if the problem is solved (ideally given multiple quantitatively specified levels of computational and inferential performance). Descriptions of fundamental research challenges that have arisen or are likely to arise are also of interest, especially if the challenge and/or the likely solutions involve connections to other fields.

Submission guidelines

Submissions should be sent by email. To aid processing, the email subject line should contain the word "submission", as well as the following keywords:

  • exactly one of "research", "description", or "challenge" based on the type of submission;
  • "talk", if and only if the authors would like the abstract considered for a contributed talk in addition to a poster; and
  • "early decision", if and only if the authors need to hear back within 72 hours concerning acceptance for registration/planning purposes.

The body of the email should include

  • a title,
  • a list of authors and emails, and
  • a PDF attachment in the NIPS LaTeX style.

Submissions should be ~3 pages + references, and they will be reviewed for correctness, clarity, relevance, and, in the case of research submissions, novelty. An optional questionnaire URL will be released in November for Language/System and Challenge Problem submissions. Accepted contributions will be made available shortly before the workshop, and will be linked online with the authors’ permission.


Vikash Mansinghka is a postdoctoral research scientist with MIT's Computer Science and Artificial Intelligence Laboratory and Department of Brain & Cognitive Sciences, where he leads the Probabilistic Computing Project. Vikash received an SB in Mathematics, an SB in Computer Science, an MEng in Computer Science, and a PhD in Computation, all from MIT, holding graduate fellowships from the National Science Foundation and MIT's Lincoln Laboratory. His PhD dissertation on natively probabilistic computation won the 2009 MIT George M. Sprowls dissertation award in computer science. He co-founded a venture-backed startup based on his research that was acquired in 2012. He served on DARPA's Information Science and Technology advisory board from 2010 to 2012, and currently serves on the editorial boards of the Journal of Machine Learning Research and the journal Statistics and Computing.

Daniel Roy is an Assistant Professor of Statistics at the University of Toronto. Roy earned an SB and MEng in Electrical Engineering and Computer Science, and a PhD in Computer Science, from MIT. His dissertation on probabilistic programming received the department's George M. Sprowls Thesis Award. Subsequently, he held a Newton International Fellowship of the Royal Society, hosted by the Machine Learning Group at the University of Cambridge, and then held a Research Fellowship at Emmanuel College. Roy's research focuses on theoretical questions that mix computer science, statistics, and probability.

Thomas Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor and Director of Intelligent Systems in the School of Electrical Engineering and Computer Science at Oregon State University. He also serves as President of the Association for the Advancement of Artificial Intelligence. Dietterich is responsible for defining “Challenge Problems” for probabilistic programming systems as part of the DARPA PPAML program. In his own research, he has primarily pursued non-probabilistic, non-parametric machine learning methods. However, more recently, he has been exploring latent-variable probabilistic models in ecological science. Hence, he combines an outsider’s skepticism with an insider’s optimism about the probabilistic programming effort.

Stuart Russell is a Professor of Computer Science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley. He received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco. In 1990, he received the Presidential Young Investigator Award of the National Science Foundation, and in 1995 he was co-winner of the Computers and Thought Award. In 1998, he gave the Forsythe Memorial Lectures at Stanford University, and in 2005 he received the ACM Karlstrom Outstanding Educator Award. From 2012 to 2014 he held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. He has published over 150 papers on a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and global seismic monitoring. His books include "The Use of Knowledge in Analogy and Induction" (Pitman, 1989), "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald, MIT Press, 1991), and "Artificial Intelligence: A Modern Approach" (with Peter Norvig, Prentice Hall, 1995, 2003, 2010).

Joshua Tenenbaum studies learning, reasoning and perception in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. His current work focuses on building probabilistic models to explain how people come to be able to learn new concepts from very sparse data, how we learn to learn, and the nature and origins of people's intuitive theories about the physical and social worlds. He is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his Ph.D. from MIT in 1999, and was a member of the Stanford University faculty in Psychology and (by courtesy) Computer Science from 1999 to 2002. His papers have received awards at numerous conferences, including CVPR (the IEEE Computer Vision and Pattern Recognition conference), ICDL (the International Conference on Learning and Development), NIPS, UAI, IJCAI and the Annual Conference of the Cognitive Science Society. He is the recipient of early career awards from the Society for Mathematical Psychology (2005), the Society of Experimental Psychologists, and the American Psychological Association (2008), and the Troland Research Award from the National Academy of Sciences (2011).

The organizers may be contacted by email.


We are grateful to Microsoft Research Cambridge for generously offering to help fund the workshop.
